Dec 27 05:44:36 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 27 05:44:36 crc restorecon[4716]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:36 crc restorecon[4716]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:36 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 27 05:44:37 crc restorecon[4716]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 27 05:44:37 crc restorecon[4716]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 27 05:44:37 crc kubenswrapper[4760]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 27 05:44:37 crc kubenswrapper[4760]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 27 05:44:37 crc kubenswrapper[4760]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 27 05:44:37 crc kubenswrapper[4760]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
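
The restorecon pass above is behaving as designed: "not reset as customized by admin" means the path already carries a customizable SELinux type (container_file_t is customizable on RHEL-family systems), which restorecon deliberately skips unless invoked with -F; only genuinely mislabeled paths, such as /var/lib/kubelet/config.json and /var/usrlocal/bin/kubenswrapper, are actually relabeled.

The four deprecation warnings that follow it all point the same way: these kubelet flags now belong in the file passed via --config. A minimal sketch of that file, assuming the upstream KubeletConfiguration schema (kubelet.config.k8s.io/v1beta1); the values are illustrative placeholders, not read from this node:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    # replaces --volume-plugin-dir
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
    # replaces --register-with-taints
    registerWithTaints:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
    # replaces --system-reserved (see the warning just below)
    systemReserved:
      cpu: "500m"
      memory: "1Gi"
    # --minimum-container-ttl-duration is superseded by eviction thresholds
    evictionHard:
      memory.available: "100Mi"

--pod-infra-container-image, warned about just below, has no config-file equivalent: as its message says, the image garbage collector now gets sandbox-image information from the CRI.
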
Dec 27 05:44:37 crc kubenswrapper[4760]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 27 05:44:37 crc kubenswrapper[4760]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.345254 4760 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349663 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349700 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349704 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349710 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349716 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349725 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349729 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349739 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349743 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349750 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349758 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349763 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349766 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349770 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349773 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349777 4760 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349780 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349784 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349787 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349791 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349794 4760 
feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349799 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349802 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349806 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349809 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349812 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349816 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349819 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349823 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349826 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349830 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349833 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349836 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349842 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349846 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349852 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349855 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349861 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
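The "unrecognized feature gate" warnings above appear to be expected on OpenShift: the cluster passes its own gate names (VSphereControlPlaneMachineSet, NewOLM, GatewayAPI, and so on) to the upstream kubelet, whose feature-gate parser only knows upstream Kubernetes gates and logs one warning per unknown name on every parsing pass, which is why the same names recur below. A minimal sketch for summarizing them from a journal capture; the kubelet.log path is an assumption:

    import re
    from collections import Counter

    # Hypothetical capture of this journal section, e.g.:
    #   journalctl -u kubelet -b > kubelet.log
    PATTERN = re.compile(r"unrecognized feature gate: (\S+)")

    with open("kubelet.log") as f:      # the path is an assumption
        gates = Counter(PATTERN.findall(f.read()))

    # Equal counts across all names indicate repeated parsing passes,
    # not distinct problems.
    for name, count in sorted(gates.items()):
        print(f"{count:2d}  {name}")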
Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349867 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349872 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349877 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349881 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349885 4760 feature_gate.go:330] unrecognized feature gate: Example Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349889 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349892 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349896 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349899 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349903 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349906 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349910 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349914 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349918 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349921 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349925 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349928 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349932 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349935 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349939 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349946 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349950 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349953 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349956 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349960 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349963 4760 feature_gate.go:330] unrecognized feature gate: 
MachineConfigNodes Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349967 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349970 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349975 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349981 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349987 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349990 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.349994 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350112 4760 flags.go:64] FLAG: --address="0.0.0.0" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350124 4760 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350135 4760 flags.go:64] FLAG: --anonymous-auth="true" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350142 4760 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350148 4760 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350153 4760 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350161 4760 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350167 4760 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350171 4760 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350175 4760 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350180 4760 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350186 4760 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350191 4760 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350195 4760 flags.go:64] FLAG: --cgroup-root="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350199 4760 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350203 4760 flags.go:64] FLAG: --client-ca-file="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350207 4760 flags.go:64] FLAG: --cloud-config="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350211 4760 flags.go:64] FLAG: --cloud-provider="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350216 4760 flags.go:64] FLAG: --cluster-dns="[]" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350223 4760 flags.go:64] FLAG: --cluster-domain="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350227 4760 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 27 05:44:37 crc 
kubenswrapper[4760]: I1227 05:44:37.350231 4760 flags.go:64] FLAG: --config-dir="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350236 4760 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350240 4760 flags.go:64] FLAG: --container-log-max-files="5" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350246 4760 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350250 4760 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350255 4760 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350260 4760 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350264 4760 flags.go:64] FLAG: --contention-profiling="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350269 4760 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350273 4760 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350277 4760 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350281 4760 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350288 4760 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350292 4760 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350297 4760 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350301 4760 flags.go:64] FLAG: --enable-load-reader="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350306 4760 flags.go:64] FLAG: --enable-server="true" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350310 4760 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350316 4760 flags.go:64] FLAG: --event-burst="100" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350321 4760 flags.go:64] FLAG: --event-qps="50" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350325 4760 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350329 4760 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350334 4760 flags.go:64] FLAG: --eviction-hard="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350340 4760 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350344 4760 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350349 4760 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350354 4760 flags.go:64] FLAG: --eviction-soft="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350358 4760 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350362 4760 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350367 4760 flags.go:64] FLAG: 
--experimental-allocatable-ignore-eviction="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350372 4760 flags.go:64] FLAG: --experimental-mounter-path="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350375 4760 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350379 4760 flags.go:64] FLAG: --fail-swap-on="true" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350384 4760 flags.go:64] FLAG: --feature-gates="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350389 4760 flags.go:64] FLAG: --file-check-frequency="20s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350393 4760 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350397 4760 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350402 4760 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350406 4760 flags.go:64] FLAG: --healthz-port="10248" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350411 4760 flags.go:64] FLAG: --help="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350415 4760 flags.go:64] FLAG: --hostname-override="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350419 4760 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350423 4760 flags.go:64] FLAG: --http-check-frequency="20s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350427 4760 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350431 4760 flags.go:64] FLAG: --image-credential-provider-config="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350435 4760 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350440 4760 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350444 4760 flags.go:64] FLAG: --image-service-endpoint="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350448 4760 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350452 4760 flags.go:64] FLAG: --kube-api-burst="100" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350456 4760 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350461 4760 flags.go:64] FLAG: --kube-api-qps="50" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350465 4760 flags.go:64] FLAG: --kube-reserved="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350470 4760 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350474 4760 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350479 4760 flags.go:64] FLAG: --kubelet-cgroups="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350483 4760 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350487 4760 flags.go:64] FLAG: --lock-file="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350491 4760 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350496 4760 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 27 
05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350500 4760 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350512 4760 flags.go:64] FLAG: --log-json-split-stream="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350518 4760 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350522 4760 flags.go:64] FLAG: --log-text-split-stream="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350526 4760 flags.go:64] FLAG: --logging-format="text" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350530 4760 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350534 4760 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350539 4760 flags.go:64] FLAG: --manifest-url="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350543 4760 flags.go:64] FLAG: --manifest-url-header="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350550 4760 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350554 4760 flags.go:64] FLAG: --max-open-files="1000000" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350559 4760 flags.go:64] FLAG: --max-pods="110" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350563 4760 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350568 4760 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350572 4760 flags.go:64] FLAG: --memory-manager-policy="None" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350576 4760 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350581 4760 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350585 4760 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350590 4760 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350603 4760 flags.go:64] FLAG: --node-status-max-images="50" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350607 4760 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350612 4760 flags.go:64] FLAG: --oom-score-adj="-999" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350617 4760 flags.go:64] FLAG: --pod-cidr="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350621 4760 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350628 4760 flags.go:64] FLAG: --pod-manifest-path="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350632 4760 flags.go:64] FLAG: --pod-max-pids="-1" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350636 4760 flags.go:64] FLAG: --pods-per-core="0" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350640 4760 flags.go:64] FLAG: --port="10250" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350644 4760 flags.go:64] FLAG: 
--protect-kernel-defaults="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350648 4760 flags.go:64] FLAG: --provider-id="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350652 4760 flags.go:64] FLAG: --qos-reserved="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350656 4760 flags.go:64] FLAG: --read-only-port="10255" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350661 4760 flags.go:64] FLAG: --register-node="true" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350665 4760 flags.go:64] FLAG: --register-schedulable="true" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350669 4760 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350677 4760 flags.go:64] FLAG: --registry-burst="10" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350681 4760 flags.go:64] FLAG: --registry-qps="5" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350685 4760 flags.go:64] FLAG: --reserved-cpus="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350690 4760 flags.go:64] FLAG: --reserved-memory="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350696 4760 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350700 4760 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350704 4760 flags.go:64] FLAG: --rotate-certificates="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350708 4760 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350712 4760 flags.go:64] FLAG: --runonce="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350716 4760 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350721 4760 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350725 4760 flags.go:64] FLAG: --seccomp-default="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350729 4760 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350738 4760 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350742 4760 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350747 4760 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350753 4760 flags.go:64] FLAG: --storage-driver-password="root" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350757 4760 flags.go:64] FLAG: --storage-driver-secure="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350761 4760 flags.go:64] FLAG: --storage-driver-table="stats" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350765 4760 flags.go:64] FLAG: --storage-driver-user="root" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350770 4760 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350774 4760 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350778 4760 flags.go:64] FLAG: --system-cgroups="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350782 4760 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350789 4760 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350793 4760 flags.go:64] FLAG: --tls-cert-file="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350797 4760 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350803 4760 flags.go:64] FLAG: --tls-min-version="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350807 4760 flags.go:64] FLAG: --tls-private-key-file="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350811 4760 flags.go:64] FLAG: --topology-manager-policy="none" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350815 4760 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350819 4760 flags.go:64] FLAG: --topology-manager-scope="container" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350823 4760 flags.go:64] FLAG: --v="2" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350830 4760 flags.go:64] FLAG: --version="false" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350836 4760 flags.go:64] FLAG: --vmodule="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350841 4760 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.350845 4760 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.350958 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.350964 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.350970 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.350974 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.350978 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.350982 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.350986 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.350991 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.350995 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.350999 4760 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351002 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351007 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351011 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351014 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 27 05:44:37 crc 
kubenswrapper[4760]: W1227 05:44:37.351018 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351021 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351025 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351028 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351032 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351035 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351038 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351042 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351046 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351049 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351053 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351056 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351060 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351064 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351067 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351070 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351074 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351077 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351081 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351084 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351091 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351107 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351111 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351114 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351119 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351122 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 27 05:44:37 crc 
kubenswrapper[4760]: W1227 05:44:37.351126 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351129 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351133 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351138 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351148 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351152 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351156 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351160 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351164 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351170 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351174 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351179 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351184 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351188 4760 feature_gate.go:330] unrecognized feature gate: Example Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351192 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351196 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351200 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351203 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351208 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351212 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351216 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351219 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351222 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351227 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
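Between the warning passes, the flags.go:64 entries of the form FLAG: --name="value" record every command-line flag value the kubelet resolved at startup, defaults included (note --cgroup-driver="cgroupfs" there, which is overridden later in this boot by the CRI-reported "systemd" driver). A short sketch, under the same hypothetical kubelet.log capture as above, to turn that dump into a dictionary for diffing against an expected configuration:

    import re

    FLAG_RE = re.compile(r'FLAG: (--[\w.-]+)="([^"]*)"')

    with open("kubelet.log") as f:      # same hypothetical capture as above
        flags = dict(FLAG_RE.findall(f.read()))

    # Spot-check a few values seen in this boot:
    assert flags["--config"] == "/etc/kubernetes/kubelet.conf"
    assert flags["--node-ip"] == "192.168.126.11"
    print(len(flags), "flags parsed")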
Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351232 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351235 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351239 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351243 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351248 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351252 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.351255 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.351271 4760 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.359123 4760 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.359175 4760 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359301 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359320 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359327 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359333 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359340 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359346 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359351 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359358 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359364 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359371 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359376 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359382 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359388 4760 feature_gate.go:330] unrecognized 
feature gate: VSphereDriverConfiguration Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359394 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359399 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359407 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359413 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359418 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359424 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359430 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359436 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359441 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359446 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359451 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359456 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359462 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359467 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359472 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359477 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359483 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359488 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359493 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359499 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359504 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359511 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359516 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359521 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359527 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 
05:44:37.359532 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359538 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359543 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359549 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359554 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359559 4760 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359566 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359574 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359580 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359586 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359591 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359597 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359602 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359607 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359613 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359619 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359624 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359631 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359638 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359644 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359649 4760 feature_gate.go:330] unrecognized feature gate: Example Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359654 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359661 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
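Each pass ends with a feature_gate.go:386 entry, feature gates: {map[...]}, giving the effective gate values once the unknown names have been dropped; the map is identical on every pass in this boot. A sketch for pulling that Go map literal into a Python dict, using an abbreviated sample line rather than the full entry:

    import re

    # Abbreviated sample; in practice use the full feature_gate.go:386 entry.
    LINE = ("feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true "
            "NodeSwap:false ValidatingAdmissionPolicy:true]}")

    def parse_gates(line: str) -> dict:
        body = re.search(r"feature gates: \{map\[(.*)\]\}", line).group(1)
        return {name: value == "true"
                for name, value in (pair.split(":", 1) for pair in body.split())}

    print(parse_gates(LINE))
    # {'CloudDualStackNodeIPs': True, 'KMSv1': True, 'NodeSwap': False,
    #  'ValidatingAdmissionPolicy': True}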
Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359668 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359673 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359680 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359687 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359693 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359700 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359705 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359712 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359718 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359726 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.359735 4760 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359908 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359917 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359923 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359928 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359934 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359940 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359946 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359953 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359958 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359965 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359971 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 27 05:44:37 crc kubenswrapper[4760]: 
W1227 05:44:37.359978 4760 feature_gate.go:330] unrecognized feature gate: Example Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359987 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.359996 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360003 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360011 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360019 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360025 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360032 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360038 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360045 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360052 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360058 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360064 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360070 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360075 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360081 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360108 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360113 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360120 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360125 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360130 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360135 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360141 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360147 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360152 4760 feature_gate.go:330] unrecognized feature gate: 
NetworkLiveMigration Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360158 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360163 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360168 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360174 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360179 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360184 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360189 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360195 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360200 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360206 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360211 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360216 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360222 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360227 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360232 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360238 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360243 4760 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360249 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360254 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360259 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360265 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360270 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360275 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360280 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360288 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360294 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360300 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360306 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360312 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360317 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360323 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360359 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360365 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360370 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.360376 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.360385 4760 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.360596 4760 server.go:940] "Client rotation is on, will bootstrap in background" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.364196 4760 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.364325 4760 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
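"Client rotation is on" means the kubelet's certificate manager renews its API-server client certificate in the background. The entries that follow show the loaded certificate expiring 2026-02-24 with a rotation deadline of 2025-11-07, consistent with client-go's jitter, which reportedly picks a random point roughly 70-90% of the way through the certificate's lifetime; the "Unhandled Error" that follows (connection refused to api-int.crc.testing:6443) is the first rotation attempt firing before the API server is reachable this early in boot, and the manager retries later. A sketch of the deadline computation; the issue date is an assumption, since it is not logged in this excerpt:

    import random
    from datetime import datetime, timedelta

    # Issue date is an assumption (not in this log); expiration is logged below.
    not_before = datetime(2025, 2, 24, 5, 52, 8)
    not_after = datetime(2026, 2, 24, 5, 52, 8)

    lifetime = (not_after - not_before).total_seconds()
    # client-go reportedly jitters rotation to ~70-90% of the lifetime.
    deadline = not_before + timedelta(seconds=lifetime * random.uniform(0.7, 0.9))
    print("rotate after:", deadline)  # the logged 2025-11-07 deadline sits at ~70%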
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.365151 4760 server.go:997] "Starting client certificate rotation"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.365189 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.365342 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-07 07:43:39.993021949 +0000 UTC
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.365407 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.371294 4760 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.372466 4760 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.374051 4760 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.386800 4760 log.go:25] "Validated CRI v1 runtime API"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.398306 4760 log.go:25] "Validated CRI v1 image API"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.399924 4760 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.403711 4760 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-27-05-40-06-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.403780 4760 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.431200 4760 manager.go:217] Machine: {Timestamp:2025-12-27 05:44:37.429643796 +0000 UTC m=+0.189713131 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a18cf57b-5a1d-4f23-a965-e04c5441f26a BootID:cab3fba8-5ed3-434a-8f84-cd17705f5a67 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:9f:75:8a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:9f:75:8a Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:32:ae:8b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d5:70:4a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:78:98:aa Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:54:c9:21 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6a:88:4a:3a:81:57 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5a:9d:a4:57:b1:cf Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.431538 4760 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.431760 4760 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.432127 4760 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.432361 4760 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.432415 4760 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.432733 4760 topology_manager.go:138] "Creating topology manager with none policy"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.432748 4760 container_manager_linux.go:303] "Creating device plugin manager"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.432941 4760 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.432990 4760 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.433346 4760 state_mem.go:36] "Initialized new in-memory state store"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.433451 4760 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.434491 4760 kubelet.go:418] "Attempting to sync node with API server"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.434520 4760 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.434564 4760 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.434583 4760 kubelet.go:324] "Adding apiserver pod source"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.434604 4760 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.436562 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused
Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.436630 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused
Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.436766 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError"
Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.436779 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.437063 4760 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.437613 4760 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
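[Editor's note] The HardEvictionThresholds in the nodeConfig above mix an absolute quantity (memory.available < 100Mi) with percentages of capacity (e.g. nodefs.available < 10%). A rough, self-contained Go sketch of how such a threshold can be evaluated against observed capacity; the type, helper names, and numbers are illustrative, not kubelet internals:

package main

import "fmt"

// threshold mirrors the shape of the HardEvictionThresholds entries
// logged above: either an absolute byte quantity or a percentage.
type threshold struct {
	signal   string
	quantity int64   // absolute bytes; 0 means "use percentage"
	percent  float64 // fraction of capacity, e.g. 0.10 for 10%
}

// crossed reports whether available has fallen below the threshold.
func (t threshold) crossed(capacity, available int64) bool {
	limit := t.quantity
	if limit == 0 {
		limit = int64(float64(capacity) * t.percent)
	}
	return available < limit
}

func main() {
	// 100Mi floor for memory.available, as in the logged config.
	memFloor := threshold{signal: "memory.available", quantity: 100 << 20}
	fmt.Println(memFloor.crossed(32<<30, 64<<20)) // true: 64Mi < 100Mi
	// 10% floor for nodefs.available, as in the logged config.
	nodefs := threshold{signal: "nodefs.available", percent: 0.10}
	fmt.Println(nodefs.crossed(80<<30, 10<<30)) // false: 10Gi > 8Gi
}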
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.438527 4760 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.439215 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.439358 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.439452 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.439529 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.439629 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.439713 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.439803 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.439893 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.439997 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.440083 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.440182 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.440268 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.440552 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.441426 4760 server.go:1280] "Started kubelet"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.441524 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.442018 4760 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.442165 4760 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 27 05:44:37 crc systemd[1]: Started Kubernetes Kubelet.
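[Editor's note] Every API call so far fails with `dial tcp 38.102.83.192:6443: connect: connection refused`: the kubelet has come up, but nothing is listening on the api-int endpoint yet (on this single-node setup the kube-apiserver is itself launched from the static pod path the kubelet just started watching), so the kubelet keeps retrying. A small standard-library Go probe for that condition; the address is taken from the log, everything else is an illustrative sketch:

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	// Endpoint taken from the log lines above.
	conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
	if err != nil {
		if errors.Is(err, syscall.ECONNREFUSED) {
			fmt.Println("API server not listening yet (connection refused); kubelet will keep retrying")
			return
		}
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("API server port is accepting connections")
}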
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.445136 4760 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.444772 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.192:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1884fc396efb4ac6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-27 05:44:37.441374918 +0000 UTC m=+0.201444243,LastTimestamp:2025-12-27 05:44:37.441374918 +0000 UTC m=+0.201444243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.447489 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.447603 4760 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.450510 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:18:05.447521932 +0000 UTC
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.450664 4760 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.450674 4760 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.450841 4760 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.451084 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.451742 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="200ms"
Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.451858 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused
Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.451931 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.453481 4760 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.453521 4760 factory.go:55] Registering systemd factory
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.453537 4760 factory.go:221] Registration of the systemd container factory successfully
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.455629 4760 factory.go:153] Registering CRI-O factory
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.456609 4760 factory.go:221] Registration of the crio container factory successfully
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.456720 4760 factory.go:103] Registering Raw factory
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.456577 4760 server.go:460] "Adding debug handlers to kubelet server"
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.456926 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.456971 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.456988 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457001 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457013 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457025 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457118 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457130 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457144 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457156 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457174 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457184 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457195 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457210 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457252 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457263 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457274 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457290 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457318 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457331 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457342 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457354 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457364 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457375 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457388 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457400 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457414 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457454 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457465 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457477 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457487 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457502 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457513 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457523 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457534 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457545 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457555 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457568 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457578 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457597 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457608 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457622 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457632 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457642 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457654 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457672 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457683 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457692 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457703 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457713 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457723 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457737 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457752 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457765 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457777 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457788 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457799 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457810 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457844 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457856 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457866 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457911 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457923 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457933 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457944 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457954 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457963 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457973 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457986 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.457996 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458006 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458020 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458034 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458049 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458058 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458069 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458079 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458108 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458119 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458129 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458139 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458153 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458163 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458174 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458183 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458194 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458204 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458215 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458225 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458235 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458244 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458255 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458266 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458277 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458286 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458295 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458305 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458317 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458326 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458338 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458339 4760 manager.go:1196] Started watching for new ooms in manager
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458350 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458467 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458494 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458511 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458539 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458558 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458575 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458591 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458606 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458621 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458636 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458650 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458667 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458682 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458698 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458711 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458725 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458738 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458752 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458764 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458779 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458794 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458809 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458827 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458842 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458856 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458872 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458888 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458903 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458918 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458934 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458954 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458968 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458982 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.458997 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.459010 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.459023 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.459037 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.459245 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.459269 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.459294 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.459308 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.459318 4760 manager.go:319] Starting recovery of all containers Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.459322 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.462841 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.462859 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.462872 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.462885 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.462900 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.462912 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.462925 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.462937 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 27 05:44:37 crc 
kubenswrapper[4760]: I1227 05:44:37.462949 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.462960 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.462972 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.462984 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.462996 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463007 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463019 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463032 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463044 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463055 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463069 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 27 05:44:37 crc 
kubenswrapper[4760]: I1227 05:44:37.463108 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463122 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463134 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463146 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463159 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463171 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463183 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463195 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463207 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463218 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463231 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463244 4760 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463255 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463267 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463280 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463804 4760 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463843 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463860 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463898 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.463912 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466392 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466424 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" 
seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466441 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466465 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466480 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466521 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466536 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466550 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466570 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466600 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466619 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466637 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466678 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 27 
05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466698 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466713 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466767 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466783 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466796 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466814 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466827 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466865 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466917 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466931 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.466949 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 
05:44:37.466962 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.467001 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.467014 4760 reconstruct.go:97] "Volume reconstruction finished" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.467023 4760 reconciler.go:26] "Reconciler: start to sync state" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.492196 4760 manager.go:324] Recovery completed Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.498801 4760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.501268 4760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.501318 4760 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.501351 4760 kubelet.go:2335] "Starting kubelet main sync loop" Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.501406 4760 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 27 05:44:37 crc kubenswrapper[4760]: W1227 05:44:37.502682 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.502958 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.506122 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.507550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.507582 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.507594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.508357 4760 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.508378 4760 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 27 05:44:37 crc kubenswrapper[4760]: I1227 05:44:37.508408 4760 state_mem.go:36] "Initialized new in-memory state store"
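
Volume reconstruction has finished and the kubelet's main sync loop is starting, but every informer list/watch against https://api-int.crc.testing:6443 is refused. On a CRC node this is expected during bootstrap: kube-apiserver is itself one of the static pods this kubelet is about to start, so the reflector warnings repeat until that pod is serving. A minimal client-go sketch of the same List call the failing reflector issues; the kubeconfig path is an illustrative assumption, not taken from this log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for illustration; the kubelet uses its own client config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same request the reflector makes:
	// GET /apis/node.k8s.io/v1/runtimeclasses?limit=500
	rcs, err := cs.NodeV1().RuntimeClasses().List(context.TODO(), metav1.ListOptions{Limit: 500})
	if err != nil {
		// While the apiserver is down this is "connect: connection refused".
		fmt.Println("list failed:", err)
		return
	}
	fmt.Println("runtime classes:", len(rcs.Items))
}
```

While the endpoint is down this prints the same "dial tcp ... connect: connection refused" error seen at reflector.go:561; client-go retries with backoff, which is why the identical warning reappears below.
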
Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.551760 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.601720 4760 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.617742 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.192:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1884fc396efb4ac6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-27 05:44:37.441374918 +0000 UTC m=+0.201444243,LastTimestamp:2025-12-27 05:44:37.441374918 +0000 UTC m=+0.201444243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.652223 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.652647 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="400ms" Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.752884 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.802767 4760 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.853294 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 27 05:44:37 crc kubenswrapper[4760]: E1227 05:44:37.954292 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 27 05:44:38 crc kubenswrapper[4760]: E1227 05:44:38.054048 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="800ms" Dec 27 05:44:38 crc kubenswrapper[4760]: E1227 05:44:38.055116 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 27 05:44:38 crc kubenswrapper[4760]: E1227 05:44:38.155493 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 27 05:44:38 crc kubenswrapper[4760]: E1227 05:44:38.203252 4760 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 27 05:44:38 crc kubenswrapper[4760]: E1227 05:44:38.255838 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.292926 4760 policy_none.go:49] "None policy: Start"
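
The "Failed to ensure lease exists, will retry" entries show the retry interval doubling while the node lease cannot be created: 400ms, then 800ms, and 1.6s further down. A toy sketch of that doubling backoff; the 7s ceiling is an assumption for illustration, not a value visible in this log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Doubling retry interval, as in the lease controller entries:
	// 400ms -> 800ms -> 1.6s -> ...
	interval := 400 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed ceiling; not shown in the log
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d failed, retrying in %s\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```

The interleaved `node "crc" not found` errors come from the node lister, which stays empty until the node object can actually be registered with the apiserver.
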
Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.294502 4760 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.294553 4760 state_mem.go:35] "Initializing new in-memory state store" Dec 27 05:44:38 crc kubenswrapper[4760]: E1227 05:44:38.355945 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.414934 4760 manager.go:334] "Starting Device Plugin manager" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.415009 4760 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.415024 4760 server.go:79] "Starting device plugin registration server" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.415698 4760 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.415721 4760 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.415993 4760 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.416283 4760 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.416316 4760 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 27 05:44:38 crc kubenswrapper[4760]: E1227 05:44:38.426806 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.444125 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.451295 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:35:31.087596057 +0000 UTC Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.516325 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.517457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.517554 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.517572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.517617 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 27 05:44:38 crc kubenswrapper[4760]: E1227 05:44:38.518529 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Dec 27 05:44:38 crc kubenswrapper[4760]: W1227 05:44:38.521178 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Dec 27 05:44:38 crc kubenswrapper[4760]: E1227 05:44:38.521324 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Dec 27 05:44:38 crc kubenswrapper[4760]: W1227 05:44:38.649236 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Dec 27 05:44:38 crc kubenswrapper[4760]: E1227 05:44:38.649320 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.719814 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.721905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.721946 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.721954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:38 crc kubenswrapper[4760]: I1227 05:44:38.721982 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 27 05:44:38 crc kubenswrapper[4760]: E1227 05:44:38.722657 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Dec 27 05:44:38 crc kubenswrapper[4760]: W1227 05:44:38.739796 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Dec 27 05:44:38 crc kubenswrapper[4760]: E1227 05:44:38.739940 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Dec 27 05:44:38 crc kubenswrapper[4760]: W1227 05:44:38.751000 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Dec 27 05:44:38 crc kubenswrapper[4760]: E1227 05:44:38.751047 4760 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Dec 27 05:44:38 crc kubenswrapper[4760]: E1227 05:44:38.855723 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="1.6s" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.003745 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.003975 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.006478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.006552 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.006572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.006790 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.007214 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.007295 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.008546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.008597 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.008610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.008799 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.008962 4760 util.go:30] "No sandbox for pod can be found. 
Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.008962 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.009020 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.009180 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.009246 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.009262 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.009897 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.009941 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.009957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.010238 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.010357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.010403 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.010445 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.010497 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.010450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.011611 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.011643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.011654 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.011624 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.011715 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.011727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.011769 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.011960 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.012000 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.012570 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.012610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.012624 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.012829 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.012854 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.012920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.012951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.012965 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.013749 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.013782 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.013795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.090718 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.090760 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.090780 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.090796 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.090822 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.090849 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.090948 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.091121 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.091192 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.091223 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.091257 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.091289 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.091318 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.091357 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.091387 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 
crc kubenswrapper[4760]: I1227 05:44:39.123322 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.124532 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.124573 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.124586 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.124616 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 27 05:44:39 crc kubenswrapper[4760]: E1227 05:44:39.125006 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.192984 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193049 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193068 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193115 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193137 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193164 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193186 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193206 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193226 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193244 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193270 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193322 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193328 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193360 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193354 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193325 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193287 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193248 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193423 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193466 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193297 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193503 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193529 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193543 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193570 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193587 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193603 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193627 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193725 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.193930 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.349886 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.359060 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: W1227 05:44:39.376121 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-63f2844306ba3705def44b204422064460311029bc04b760529d06e4e3e77626 WatchSource:0}: Error finding container 63f2844306ba3705def44b204422064460311029bc04b760529d06e4e3e77626: Status 404 returned error can't find the container with id 63f2844306ba3705def44b204422064460311029bc04b760529d06e4e3e77626 Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.403054 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.428320 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.441358 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.444669 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.451880 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:03:29.241432823 +0000 UTC Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.507904 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f8dd6bf331412c9c44b8569aef16937ce9d2cb40433e1c6952d72d76d6d3fbe7"} Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.509377 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"63f2844306ba3705def44b204422064460311029bc04b760529d06e4e3e77626"} Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.534824 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 27 05:44:39 crc kubenswrapper[4760]: E1227 05:44:39.535846 4760 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.925265 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.927105 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.927144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.927153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:39 crc kubenswrapper[4760]: I1227 05:44:39.927179 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 27 05:44:39 crc kubenswrapper[4760]: E1227 05:44:39.927696 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Dec 27 05:44:40 crc kubenswrapper[4760]: I1227 05:44:40.444014 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Dec 27 05:44:40 crc kubenswrapper[4760]: I1227 05:44:40.452341 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 01:21:19.451304365 +0000 UTC Dec 
27 05:44:40 crc kubenswrapper[4760]: I1227 05:44:40.452374 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 91h36m38.998934318s for next certificate rotation Dec 27 05:44:40 crc kubenswrapper[4760]: E1227 05:44:40.457280 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="3.2s" Dec 27 05:44:40 crc kubenswrapper[4760]: W1227 05:44:40.536033 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-50f0a08cf01416a0983cc3de375911e3ffd34133e1821617b2a9153e9233819e WatchSource:0}: Error finding container 50f0a08cf01416a0983cc3de375911e3ffd34133e1821617b2a9153e9233819e: Status 404 returned error can't find the container with id 50f0a08cf01416a0983cc3de375911e3ffd34133e1821617b2a9153e9233819e Dec 27 05:44:40 crc kubenswrapper[4760]: W1227 05:44:40.539087 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c0c381ea14eb2588979b93b36f78093217788482972a4e91f3447c622e07c8a7 WatchSource:0}: Error finding container c0c381ea14eb2588979b93b36f78093217788482972a4e91f3447c622e07c8a7: Status 404 returned error can't find the container with id c0c381ea14eb2588979b93b36f78093217788482972a4e91f3447c622e07c8a7 Dec 27 05:44:40 crc kubenswrapper[4760]: W1227 05:44:40.542635 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-4831ccc61772cbc169773f0aab75310ea1a1df0fafea2b51a2de8b6e0cade4b3 WatchSource:0}: Error finding container 4831ccc61772cbc169773f0aab75310ea1a1df0fafea2b51a2de8b6e0cade4b3: Status 404 returned error can't find the container with id 4831ccc61772cbc169773f0aab75310ea1a1df0fafea2b51a2de8b6e0cade4b3 Dec 27 05:44:40 crc kubenswrapper[4760]: W1227 05:44:40.694629 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Dec 27 05:44:40 crc kubenswrapper[4760]: E1227 05:44:40.694719 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.444493 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.515494 4760 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="9a60849e2f000453d05d2dfd05d8e92d22d3d608f9011ca24ab364ce287d801a" exitCode=0 Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.515561 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9a60849e2f000453d05d2dfd05d8e92d22d3d608f9011ca24ab364ce287d801a"} Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.515592 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.516826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.516861 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.516873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.518998 4760 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="defa202dee001828a2f06a9eac3136753fd9c212f1241af12575ec70c6283c86" exitCode=0 Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.519058 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"defa202dee001828a2f06a9eac3136753fd9c212f1241af12575ec70c6283c86"} Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.519199 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.525447 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.525476 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.525522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.527782 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.530463 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.530494 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.530504 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.530527 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.530546 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3" exitCode=0 Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.530598 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3"} Dec 27 05:44:41 crc 
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.530615 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4831ccc61772cbc169773f0aab75310ea1a1df0fafea2b51a2de8b6e0cade4b3"}
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.530684 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 27 05:44:41 crc kubenswrapper[4760]: E1227 05:44:41.530989 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc"
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.531430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.531503 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.531523 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.536302 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774"}
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.536322 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4"}
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.536332 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d"}
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.536343 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c0c381ea14eb2588979b93b36f78093217788482972a4e91f3447c622e07c8a7"}
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.538836 4760 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="459e67d58c186ffd55298cb1b2e51297470c5fe04c4666f5fc6fd092c0a644df" exitCode=0
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.538893 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"459e67d58c186ffd55298cb1b2e51297470c5fe04c4666f5fc6fd092c0a644df"}
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.538935 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"50f0a08cf01416a0983cc3de375911e3ffd34133e1821617b2a9153e9233819e"}
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.539120 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.540215 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.540242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.540251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.547599 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.554409 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.554452 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:44:41 crc kubenswrapper[4760]: I1227 05:44:41.554461 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:44:41 crc kubenswrapper[4760]: W1227 05:44:41.592909 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused
Dec 27 05:44:41 crc kubenswrapper[4760]: E1227 05:44:41.592989 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError"
Dec 27 05:44:41 crc kubenswrapper[4760]: W1227 05:44:41.736070 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused
Dec 27 05:44:41 crc kubenswrapper[4760]: E1227 05:44:41.736174 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError"
Dec 27 05:44:41 crc kubenswrapper[4760]: W1227 05:44:41.848449 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused
Dec 27 05:44:41 crc kubenswrapper[4760]: E1227 05:44:41.848528 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError"
Dec 27 05:44:42 crc kubenswrapper[4760]: I1227 05:44:42.444348 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused
Dec 27 05:44:42 crc kubenswrapper[4760]: I1227 05:44:42.544794 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa"}
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.549738 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3b7880e77761692416d9d8d5482860e21c594b83f6627009f83939b576998d9d"}
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.549779 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0ba024cf540fd521be6ad22ee118d4fcdc7e38642d2eac86bb0d78aec3b6658e"}
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.552862 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711"}
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.552914 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42"}
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.554867 4760 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9f724bda155b50fae94fb410443a08f6f62a4b04274d9a2d0da7611b46290789" exitCode=0
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.554914 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9f724bda155b50fae94fb410443a08f6f62a4b04274d9a2d0da7611b46290789"}
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.555188 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.556340 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.556369 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.556388 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.556761 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3055e0b3cf1df70f27bd4594474f908bf1d255cf2d8bee6999d5fe3a01f9805f"}
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.556784 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.556829 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.557806 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.557860 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.557882 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.558192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.558300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.558331 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.740594 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 27 05:44:43 crc kubenswrapper[4760]: I1227 05:44:43.909463 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 27 05:44:44 crc kubenswrapper[4760]: I1227 05:44:44.560824 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 27 05:44:44 crc kubenswrapper[4760]: I1227 05:44:44.560891 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 27 05:44:44 crc kubenswrapper[4760]: I1227 05:44:44.562540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:44:44 crc kubenswrapper[4760]: I1227 05:44:44.562551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:44:44 crc kubenswrapper[4760]: I1227 05:44:44.562586 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:44:44 crc kubenswrapper[4760]: I1227 05:44:44.562647 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:44:44 crc kubenswrapper[4760]: I1227 05:44:44.562616 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:44:44 crc kubenswrapper[4760]: I1227 05:44:44.562811 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:44:44 crc kubenswrapper[4760]: I1227 05:44:44.731469 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 27 05:44:44 crc kubenswrapper[4760]: I1227 05:44:44.734931 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:44:44 crc kubenswrapper[4760]: I1227 05:44:44.735018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:44:44 crc kubenswrapper[4760]: I1227 05:44:44.735042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:44:44 crc kubenswrapper[4760]: I1227 05:44:44.735125 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 27 05:44:45 crc kubenswrapper[4760]: I1227 05:44:45.572400 4760 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="009fb0189fe8356f6f5a9067737ecdef07e7b2478b614eb21e713e3c68520adf" exitCode=0
Dec 27 05:44:45 crc kubenswrapper[4760]: I1227 05:44:45.572472 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"009fb0189fe8356f6f5a9067737ecdef07e7b2478b614eb21e713e3c68520adf"}
Dec 27 05:44:45 crc kubenswrapper[4760]: I1227 05:44:45.572588 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 27 05:44:45 crc kubenswrapper[4760]: I1227 05:44:45.573329 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:44:45 crc kubenswrapper[4760]: I1227 05:44:45.573365 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:44:45 crc kubenswrapper[4760]: I1227 05:44:45.573417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:44:45 crc kubenswrapper[4760]: I1227 05:44:45.575891 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d74de918429279895f641a9427303c2cdc7fcf4b2c010a4544e6fbe784ba9b60"}
Dec 27 05:44:45 crc kubenswrapper[4760]: I1227 05:44:45.575987 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 27 05:44:45 crc kubenswrapper[4760]: I1227 05:44:45.577022 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:44:45 crc kubenswrapper[4760]: I1227 05:44:45.577055 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:44:45 crc kubenswrapper[4760]: I1227 05:44:45.577067 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:44:45 crc kubenswrapper[4760]: I1227 05:44:45.579513 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316"}
Dec 27 05:44:45 crc kubenswrapper[4760]: I1227 05:44:45.579563 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f"}
Dec 27 05:44:46 crc kubenswrapper[4760]: I1227 05:44:46.588649 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c"}
Dec 27 05:44:46 crc kubenswrapper[4760]: I1227 05:44:46.588819 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 27 05:44:46 crc kubenswrapper[4760]: I1227 05:44:46.590380 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:44:46 crc kubenswrapper[4760]: I1227 05:44:46.590426 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:44:46 crc kubenswrapper[4760]: I1227 05:44:46.590451 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:44:46 crc kubenswrapper[4760]: I1227 05:44:46.595225 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 27 05:44:46 crc kubenswrapper[4760]: I1227 05:44:46.595283 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 27 05:44:46 crc kubenswrapper[4760]: I1227 05:44:46.595858 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1a0bb0b99455be3f6cd75ca789f9630961cc39b9bb4fb6e5f79b8af7accc89b3"}
Dec 27 05:44:46 crc kubenswrapper[4760]: I1227 05:44:46.595908 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9e5b023d4eadbdae6e3bd09818cd37d18044e7a79ad35867d00422e1669b1592"}
Dec 27 05:44:46 crc kubenswrapper[4760]: I1227 05:44:46.595928 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1e3b19b9c58bbc3e3a3ad24936ff80a5815f8cfe5af99c48e2545fbec9af5199"}
Dec 27 05:44:46 crc kubenswrapper[4760]: I1227 05:44:46.596491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:44:46 crc kubenswrapper[4760]: I1227 05:44:46.596533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:44:46 crc kubenswrapper[4760]: I1227 05:44:46.596552 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.282402 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.282630 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.284072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.284122 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.284132 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.605879 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"495afde63a4df794b859b09ebf1543ea09973a5b684812f83f894496fa18f3d7"}
Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.605989 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5484fadbffbdaeab7fb8cc1b01f0b059ee481af9a44850bcb3a755f9fdb332f4"}
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5484fadbffbdaeab7fb8cc1b01f0b059ee481af9a44850bcb3a755f9fdb332f4"} Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.606027 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.606121 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.606007 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.607859 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.607928 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.607953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.608284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.608373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.608413 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.626032 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.626312 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.627817 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.627868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.627893 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:47 crc kubenswrapper[4760]: I1227 05:44:47.648062 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.132309 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.132610 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.134456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.134511 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.134528 4760 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.140164 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:44:48 crc kubenswrapper[4760]: E1227 05:44:48.427228 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.609269 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.609816 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.610631 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.614913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.614979 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.615004 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.615236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.615294 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.615313 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.616832 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.616883 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:48 crc kubenswrapper[4760]: I1227 05:44:48.616905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:49 crc kubenswrapper[4760]: I1227 05:44:49.391388 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:44:49 crc kubenswrapper[4760]: I1227 05:44:49.612534 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:49 crc kubenswrapper[4760]: I1227 05:44:49.612595 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:49 crc kubenswrapper[4760]: I1227 05:44:49.614275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:49 crc kubenswrapper[4760]: I1227 05:44:49.614343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:49 crc kubenswrapper[4760]: I1227 05:44:49.614375 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 
05:44:49 crc kubenswrapper[4760]: I1227 05:44:49.614284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:49 crc kubenswrapper[4760]: I1227 05:44:49.614520 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:49 crc kubenswrapper[4760]: I1227 05:44:49.614551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:49 crc kubenswrapper[4760]: I1227 05:44:49.819724 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:44:50 crc kubenswrapper[4760]: I1227 05:44:50.283744 4760 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 27 05:44:50 crc kubenswrapper[4760]: I1227 05:44:50.283833 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 27 05:44:50 crc kubenswrapper[4760]: I1227 05:44:50.300381 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 27 05:44:50 crc kubenswrapper[4760]: I1227 05:44:50.614264 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:50 crc kubenswrapper[4760]: I1227 05:44:50.614432 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:50 crc kubenswrapper[4760]: I1227 05:44:50.615313 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:50 crc kubenswrapper[4760]: I1227 05:44:50.615417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:50 crc kubenswrapper[4760]: I1227 05:44:50.615441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:50 crc kubenswrapper[4760]: I1227 05:44:50.615836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:50 crc kubenswrapper[4760]: I1227 05:44:50.615909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:50 crc kubenswrapper[4760]: I1227 05:44:50.615934 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:53 crc kubenswrapper[4760]: I1227 05:44:53.066382 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:44:53 crc kubenswrapper[4760]: I1227 05:44:53.066897 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:53 crc kubenswrapper[4760]: I1227 05:44:53.068719 4760 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:53 crc kubenswrapper[4760]: I1227 05:44:53.068793 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:53 crc kubenswrapper[4760]: I1227 05:44:53.068815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:53 crc kubenswrapper[4760]: I1227 05:44:53.072652 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:44:53 crc kubenswrapper[4760]: I1227 05:44:53.444812 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 27 05:44:53 crc kubenswrapper[4760]: I1227 05:44:53.622154 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:53 crc kubenswrapper[4760]: I1227 05:44:53.623476 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:53 crc kubenswrapper[4760]: I1227 05:44:53.623543 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:53 crc kubenswrapper[4760]: I1227 05:44:53.623562 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:53 crc kubenswrapper[4760]: E1227 05:44:53.658699 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Dec 27 05:44:53 crc kubenswrapper[4760]: E1227 05:44:53.741878 4760 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 27 05:44:54 crc kubenswrapper[4760]: W1227 05:44:54.221065 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 27 05:44:54 crc kubenswrapper[4760]: I1227 05:44:54.221259 4760 trace.go:236] Trace[1271184493]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Dec-2025 05:44:44.219) (total time: 10001ms): Dec 27 05:44:54 crc kubenswrapper[4760]: Trace[1271184493]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (05:44:54.221) Dec 27 05:44:54 crc kubenswrapper[4760]: Trace[1271184493]: [10.001307855s] [10.001307855s] END Dec 27 05:44:54 crc kubenswrapper[4760]: E1227 05:44:54.221339 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 27 05:44:54 crc kubenswrapper[4760]: E1227 05:44:54.737256 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 27 05:44:55 crc kubenswrapper[4760]: W1227 05:44:55.414141 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 27 05:44:55 crc kubenswrapper[4760]: I1227 05:44:55.414229 4760 trace.go:236] Trace[858610809]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Dec-2025 05:44:45.412) (total time: 10001ms): Dec 27 05:44:55 crc kubenswrapper[4760]: Trace[858610809]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (05:44:55.414) Dec 27 05:44:55 crc kubenswrapper[4760]: Trace[858610809]: [10.001854797s] [10.001854797s] END Dec 27 05:44:55 crc kubenswrapper[4760]: E1227 05:44:55.414254 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 27 05:44:56 crc kubenswrapper[4760]: I1227 05:44:56.266595 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 27 05:44:56 crc kubenswrapper[4760]: I1227 05:44:56.266651 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 27 05:44:56 crc kubenswrapper[4760]: I1227 05:44:56.273181 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 27 05:44:56 crc kubenswrapper[4760]: I1227 05:44:56.273477 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 27 05:44:58 crc kubenswrapper[4760]: E1227 05:44:58.427333 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 27 05:44:59 crc kubenswrapper[4760]: I1227 05:44:59.399476 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:44:59 crc kubenswrapper[4760]: I1227 05:44:59.399750 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:59 crc kubenswrapper[4760]: I1227 05:44:59.402203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:59 crc kubenswrapper[4760]: I1227 05:44:59.402251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:59 crc kubenswrapper[4760]: I1227 05:44:59.402267 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:44:59 crc kubenswrapper[4760]: I1227 05:44:59.407716 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:44:59 crc kubenswrapper[4760]: I1227 05:44:59.654688 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:44:59 crc kubenswrapper[4760]: I1227 05:44:59.656045 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:44:59 crc kubenswrapper[4760]: I1227 05:44:59.656122 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:44:59 crc kubenswrapper[4760]: I1227 05:44:59.656135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:00 crc kubenswrapper[4760]: I1227 05:45:00.283527 4760 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 27 05:45:00 crc kubenswrapper[4760]: I1227 05:45:00.283638 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 27 05:45:00 crc kubenswrapper[4760]: I1227 05:45:00.329150 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 27 05:45:00 crc kubenswrapper[4760]: I1227 05:45:00.329425 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:45:00 crc kubenswrapper[4760]: I1227 05:45:00.330917 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:00 crc kubenswrapper[4760]: I1227 05:45:00.331055 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:00 crc kubenswrapper[4760]: I1227 05:45:00.331170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:00 crc kubenswrapper[4760]: I1227 05:45:00.346298 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 27 05:45:00 crc kubenswrapper[4760]: I1227 05:45:00.657318 4760 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Dec 27 05:45:00 crc kubenswrapper[4760]: I1227 05:45:00.659567 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:00 crc kubenswrapper[4760]: I1227 05:45:00.659615 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:00 crc kubenswrapper[4760]: I1227 05:45:00.659635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.138179 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.139910 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.139970 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.139990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.140034 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.146665 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.261002 4760 trace.go:236] Trace[1983110408]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Dec-2025 05:44:47.496) (total time: 13764ms): Dec 27 05:45:01 crc kubenswrapper[4760]: Trace[1983110408]: ---"Objects listed" error: 13764ms (05:45:01.260) Dec 27 05:45:01 crc kubenswrapper[4760]: Trace[1983110408]: [13.764171606s] [13.764171606s] END Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.261132 4760 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.262985 4760 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.278569 4760 trace.go:236] Trace[2123264179]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Dec-2025 05:44:47.408) (total time: 13870ms): Dec 27 05:45:01 crc kubenswrapper[4760]: Trace[2123264179]: ---"Objects listed" error: 13870ms (05:45:01.278) Dec 27 05:45:01 crc kubenswrapper[4760]: Trace[2123264179]: [13.870391878s] [13.870391878s] END Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.278625 4760 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.448290 4760 apiserver.go:52] "Watching apiserver" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.451217 4760 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.451491 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.451811 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.451888 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.452106 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.452385 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.452480 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.452575 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.452674 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.452729 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.452860 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.453546 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.453538 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.454154 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.454878 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.454879 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.456334 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.458301 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.458411 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.458499 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.481172 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.496279 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.504360 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.521163 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.530290 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.539367 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.548922 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.550793 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49268->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.550919 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49268->192.168.126.11:17697: read: connection reset by peer" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.550796 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49252->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.551362 4760 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.551434 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.551499 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.551421 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49252->192.168.126.11:17697: read: connection reset by 
peer" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565364 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565406 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565427 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565446 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565471 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565494 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565509 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565525 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565543 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565558 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565573 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565587 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565602 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565618 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565632 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565646 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565660 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565677 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565693 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565716 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565734 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565792 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565807 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565822 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565840 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565860 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565888 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565909 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565930 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565951 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565973 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.565998 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566243 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566298 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566346 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566370 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566393 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566412 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566431 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566451 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566474 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566494 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566517 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566540 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566565 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566584 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566606 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566628 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566652 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566673 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566696 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566716 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566737 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566758 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566786 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566808 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566831 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566849 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566897 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566918 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566940 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566960 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.566981 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567003 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567034 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567054 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567077 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567118 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567186 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567214 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567237 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567261 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567290 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567311 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567333 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567353 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567374 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567371 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567396 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567434 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567483 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567511 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567496 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567532 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567530 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567609 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567629 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567711 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567732 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567734 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567748 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567755 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567779 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567777 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567796 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567815 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567831 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567848 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567867 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567904 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567951 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567972 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.567990 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568006 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568023 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568027 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568039 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568055 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568071 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568104 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568123 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568141 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568158 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568175 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568190 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568207 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568223 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568241 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568259 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568275 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568281 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568291 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568309 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568327 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568344 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568360 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568379 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568398 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568413 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568428 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568445 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568461 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568468 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568476 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568489 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568514 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568535 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568553 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568571 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568578 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568580 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568589 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568650 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568680 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568702 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568724 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568747 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568760 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568803 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568839 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568872 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568904 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568937 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.568972 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569006 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569042 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569076 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569141 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569176 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569221 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569257 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569293 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569322 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569327 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569374 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569411 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569430 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569437 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569448 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569466 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569465 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569486 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569504 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569522 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569538 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569554 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569573 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569589 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569612 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569629 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569648 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569646 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569666 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569654 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569684 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569702 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569718 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569737 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569754 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569800 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569821 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569841 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569860 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569876 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569895 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569913 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569929 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569947 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569964 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569983 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570001 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570018 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570035 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570053 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570071 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570101 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570119 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570138 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570155 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570173 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570191 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570211 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570228 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570246 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570264 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570283 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570301 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570320 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570362 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570391 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570411 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570434 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570451 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570471 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570490 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570506 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570530 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570547 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570566 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570584 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570603 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570621 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570726 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570738 4760 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570771 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570782 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570794 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570805 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570817
4760 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570827 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570837 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570846 4760 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570855 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570865 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570874 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570885 4760 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570895 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570904 4760 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570913 4760 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570923 4760 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570933 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570943 4760 
reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570952 4760 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570961 4760 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570975 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.614685 4760 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569755 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569879 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569900 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569969 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.569965 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.625260 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570123 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570175 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.625457 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570252 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570360 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.570468 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.572552 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.572643 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.575302 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.575482 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.575611 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.575855 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.576015 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.576061 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.576069 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.576270 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.577159 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.577408 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.577739 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.578552 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.581721 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.586858 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.587245 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.593491 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.596068 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.596186 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.596299 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.597301 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.601306 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.602290 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.602306 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.602658 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.602822 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.602931 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.602949 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.603008 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.603182 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.603332 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.603469 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.603559 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.603576 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.603590 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.603934 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.603996 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.604155 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.604358 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.604397 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.604485 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.604768 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.605029 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.605491 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.606031 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.606136 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.606153 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.608185 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.608307 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.608399 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.608646 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.608651 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.608898 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.609293 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.609627 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.610273 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.610889 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.611229 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.613273 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.613656 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.613825 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.617191 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.617532 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.617545 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.617788 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.617912 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.618600 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.614691 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.619407 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.619500 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.619785 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.619871 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.620162 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.620413 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.620480 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.620621 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.620623 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.620750 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.620774 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.620929 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.621002 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.621170 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.621242 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.621291 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.621497 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.621642 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.621667 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.621710 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.621852 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.621868 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.622230 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.622520 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.622721 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.622950 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.623305 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.623375 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.623542 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.623698 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.624766 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.624982 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.625652 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.625747 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.625839 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.626305 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:02.126283167 +0000 UTC m=+24.886352502 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.626496 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.626868 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.627116 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.628312 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.628952 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.628958 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.629016 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.630745 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.630844 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.630861 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.636682 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.636852 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.636973 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.636994 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.637144 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.637279 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.637293 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.637338 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.637563 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.637577 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.637616 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:02.137603197 +0000 UTC m=+24.897672512 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.637799 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.637925 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.638033 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.638141 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.638191 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.638270 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:02.13814954 +0000 UTC m=+24.898218925 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.638323 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.638442 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.638499 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.638619 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.638625 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.638792 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.638833 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.638963 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.639147 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.639198 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.639390 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.640267 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.636951 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.640480 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.640590 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.640620 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.640883 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.641270 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.648624 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.648742 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.648813 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.648914 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:02.148898526 +0000 UTC m=+24.908967831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.652456 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.652496 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.655155 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.655229 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.655355 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.655368 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.656233 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.656431 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.656974 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.657167 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.658722 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.658878 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.658941 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.658829 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.659229 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.660055 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.660837 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.661029 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.661046 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.661056 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.661190 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:02.161083667 +0000 UTC m=+24.921152982 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.663435 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.663807 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c" exitCode=255 Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.663840 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c"} Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.669608 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.670086 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672260 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672316 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672384 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672397 4760 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672408 4760 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672418 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672428 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672439 4760 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672449 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672458 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672469 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672479 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672489 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672500 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672512 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672523 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672535 4760 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672545 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672556 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672566 4760 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672576 4760 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672585 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672596 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672608 4760 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672618 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672630 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672640 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672650 4760 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672660 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672671 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672681 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672690 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672701 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath 
\"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672712 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672722 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672731 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672739 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672747 4760 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672755 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672763 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672772 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672781 4760 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672789 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672796 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672804 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672811 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672819 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672827 4760 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672835 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672844 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672853 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672861 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672869 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672877 4760 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672887 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672895 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672904 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672912 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672921 
4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672929 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672936 4760 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672952 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672959 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672967 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672975 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672984 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672991 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.672999 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673007 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673016 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673024 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673032 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673040 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673048 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673056 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673063 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673072 4760 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673079 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673107 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673116 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673124 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673133 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673142 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673150 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673147 4760 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673160 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673192 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673202 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673217 4760 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673230 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673241 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673252 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673264 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673276 4760 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673288 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673304 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673316 4760 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673328 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673340 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673352 4760 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673363 4760 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673376 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673388 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673399 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673411 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673424 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673435 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673449 4760 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673461 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673472 4760 reconciler_common.go:293] "Volume detached for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673485 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673497 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673508 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673527 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673540 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673552 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673564 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673576 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673587 4760 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673601 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673612 4760 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673623 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673635 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673648 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673661 4760 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673672 4760 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673684 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673695 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673706 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673718 4760 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673729 4760 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673740 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673751 4760 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673762 4760 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673773 4760 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673784 4760 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673794 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673805 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673816 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673827 4760 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673840 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673856 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673868 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673880 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673891 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673903 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673915 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673927 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673938 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673950 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673962 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673974 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673985 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.673998 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674009 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674020 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674032 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674043 4760 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674054 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674066 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674077 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674106 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674144 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674156 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674167 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674179 4760 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674191 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674203 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674214 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674226 4760 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674238 4760 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.674251 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.676202 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.681996 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.685365 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.687408 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.690568 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.700281 4760 scope.go:117] "RemoveContainer" containerID="8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.706656 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.738915 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.760453 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.764185 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.771014 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.778331 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.778787 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.778871 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.778928 4760 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 
27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.784042 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.791377 4760 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 27 05:45:01 crc kubenswrapper[4760]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Dec 27 05:45:01 crc kubenswrapper[4760]: set -o allexport Dec 27 05:45:01 crc kubenswrapper[4760]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 27 05:45:01 crc kubenswrapper[4760]: source /etc/kubernetes/apiserver-url.env Dec 27 05:45:01 crc kubenswrapper[4760]: else Dec 27 05:45:01 crc kubenswrapper[4760]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 27 05:45:01 crc kubenswrapper[4760]: exit 1 Dec 27 05:45:01 crc kubenswrapper[4760]: fi Dec 27 05:45:01 crc kubenswrapper[4760]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 27 05:45:01 crc kubenswrapper[4760]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9
ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 27 05:45:01 crc kubenswrapper[4760]: > logger="UnhandledError" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.792912 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.795539 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.820735 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.821874 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" 
pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.822988 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-t57k6"] Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.823358 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-t57k6" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.825452 4760 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 27 05:45:01 crc kubenswrapper[4760]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Dec 27 05:45:01 crc kubenswrapper[4760]: if [[ -f "/env/_master" ]]; then Dec 27 05:45:01 crc kubenswrapper[4760]: set -o allexport Dec 27 05:45:01 crc kubenswrapper[4760]: source "/env/_master" Dec 27 05:45:01 crc kubenswrapper[4760]: set +o allexport Dec 27 05:45:01 crc kubenswrapper[4760]: fi Dec 27 05:45:01 crc kubenswrapper[4760]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Dec 27 05:45:01 crc kubenswrapper[4760]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 27 05:45:01 crc kubenswrapper[4760]: ho_enable="--enable-hybrid-overlay" Dec 27 05:45:01 crc kubenswrapper[4760]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 27 05:45:01 crc kubenswrapper[4760]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 27 05:45:01 crc kubenswrapper[4760]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 27 05:45:01 crc kubenswrapper[4760]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 27 05:45:01 crc kubenswrapper[4760]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 27 05:45:01 crc kubenswrapper[4760]: --webhook-host=127.0.0.1 \ Dec 27 05:45:01 crc kubenswrapper[4760]: --webhook-port=9743 \ Dec 27 05:45:01 crc kubenswrapper[4760]: ${ho_enable} \ Dec 27 05:45:01 crc kubenswrapper[4760]: --enable-interconnect \ Dec 27 05:45:01 crc kubenswrapper[4760]: --disable-approver \ Dec 27 05:45:01 crc kubenswrapper[4760]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 27 05:45:01 crc kubenswrapper[4760]: --wait-for-kubernetes-api=200s \ Dec 27 05:45:01 crc kubenswrapper[4760]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 27 05:45:01 crc kubenswrapper[4760]: --loglevel="${LOGLEVEL}" Dec 27 05:45:01 crc kubenswrapper[4760]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 27 05:45:01 crc kubenswrapper[4760]: > logger="UnhandledError" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.825733 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.825974 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.826455 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.831001 4760 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 27 05:45:01 crc kubenswrapper[4760]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Dec 27 05:45:01 crc kubenswrapper[4760]: if [[ -f "/env/_master" ]]; then Dec 27 05:45:01 crc kubenswrapper[4760]: set -o allexport Dec 27 05:45:01 crc kubenswrapper[4760]: source "/env/_master" Dec 27 05:45:01 crc kubenswrapper[4760]: set +o allexport Dec 27 05:45:01 crc kubenswrapper[4760]: fi Dec 27 05:45:01 crc kubenswrapper[4760]: Dec 27 05:45:01 crc kubenswrapper[4760]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 27 05:45:01 crc kubenswrapper[4760]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 27 05:45:01 crc kubenswrapper[4760]: --disable-webhook \ Dec 27 05:45:01 crc kubenswrapper[4760]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 27 05:45:01 crc kubenswrapper[4760]: --loglevel="${LOGLEVEL}" Dec 27 05:45:01 crc kubenswrapper[4760]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 27 05:45:01 crc kubenswrapper[4760]: > logger="UnhandledError" Dec 27 05:45:01 crc kubenswrapper[4760]: E1227 05:45:01.833318 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.858993 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.880390 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/67571db3-3f43-4589-bf18-a42b6ea3da12-hosts-file\") pod \"node-resolver-t57k6\" (UID: \"67571db3-3f43-4589-bf18-a42b6ea3da12\") " pod="openshift-dns/node-resolver-t57k6" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.880434 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bksjx\" (UniqueName: \"kubernetes.io/projected/67571db3-3f43-4589-bf18-a42b6ea3da12-kube-api-access-bksjx\") pod \"node-resolver-t57k6\" (UID: \"67571db3-3f43-4589-bf18-a42b6ea3da12\") " pod="openshift-dns/node-resolver-t57k6" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.892317 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.909185 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.948505 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.962910 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.971423 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.978925 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.981118 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/67571db3-3f43-4589-bf18-a42b6ea3da12-hosts-file\") pod \"node-resolver-t57k6\" (UID: \"67571db3-3f43-4589-bf18-a42b6ea3da12\") " pod="openshift-dns/node-resolver-t57k6" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.981176 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bksjx\" (UniqueName: \"kubernetes.io/projected/67571db3-3f43-4589-bf18-a42b6ea3da12-kube-api-access-bksjx\") pod \"node-resolver-t57k6\" (UID: \"67571db3-3f43-4589-bf18-a42b6ea3da12\") " pod="openshift-dns/node-resolver-t57k6" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.981231 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/67571db3-3f43-4589-bf18-a42b6ea3da12-hosts-file\") pod \"node-resolver-t57k6\" (UID: \"67571db3-3f43-4589-bf18-a42b6ea3da12\") " pod="openshift-dns/node-resolver-t57k6" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.989185 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:01 crc kubenswrapper[4760]: I1227 05:45:01.996601 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bksjx\" (UniqueName: \"kubernetes.io/projected/67571db3-3f43-4589-bf18-a42b6ea3da12-kube-api-access-bksjx\") pod \"node-resolver-t57k6\" (UID: \"67571db3-3f43-4589-bf18-a42b6ea3da12\") " pod="openshift-dns/node-resolver-t57k6" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.137132 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-t57k6" Dec 27 05:45:02 crc kubenswrapper[4760]: W1227 05:45:02.154215 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67571db3_3f43_4589_bf18_a42b6ea3da12.slice/crio-033ad5b606e36bccaf0c212094f19223d9e1267d9a05de5b1d139c8a96a764e8 WatchSource:0}: Error finding container 033ad5b606e36bccaf0c212094f19223d9e1267d9a05de5b1d139c8a96a764e8: Status 404 returned error can't find the container with id 033ad5b606e36bccaf0c212094f19223d9e1267d9a05de5b1d139c8a96a764e8 Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.157369 4760 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 27 05:45:02 crc kubenswrapper[4760]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Dec 27 05:45:02 crc kubenswrapper[4760]: set -uo pipefail Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 27 05:45:02 crc kubenswrapper[4760]: HOSTS_FILE="/etc/hosts" Dec 27 05:45:02 crc kubenswrapper[4760]: TEMP_FILE="/etc/hosts.tmp" Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: # Make a temporary file with the old hosts file's attributes. Dec 27 05:45:02 crc kubenswrapper[4760]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 27 05:45:02 crc kubenswrapper[4760]: echo "Failed to preserve hosts file. Exiting." 
Dec 27 05:45:02 crc kubenswrapper[4760]: exit 1 Dec 27 05:45:02 crc kubenswrapper[4760]: fi Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: while true; do Dec 27 05:45:02 crc kubenswrapper[4760]: declare -A svc_ips Dec 27 05:45:02 crc kubenswrapper[4760]: for svc in "${services[@]}"; do Dec 27 05:45:02 crc kubenswrapper[4760]: # Fetch service IP from cluster dns if present. We make several tries Dec 27 05:45:02 crc kubenswrapper[4760]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 27 05:45:02 crc kubenswrapper[4760]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 27 05:45:02 crc kubenswrapper[4760]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 27 05:45:02 crc kubenswrapper[4760]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 27 05:45:02 crc kubenswrapper[4760]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 27 05:45:02 crc kubenswrapper[4760]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 27 05:45:02 crc kubenswrapper[4760]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 27 05:45:02 crc kubenswrapper[4760]: for i in ${!cmds[*]} Dec 27 05:45:02 crc kubenswrapper[4760]: do Dec 27 05:45:02 crc kubenswrapper[4760]: ips=($(eval "${cmds[i]}")) Dec 27 05:45:02 crc kubenswrapper[4760]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 27 05:45:02 crc kubenswrapper[4760]: svc_ips["${svc}"]="${ips[@]}" Dec 27 05:45:02 crc kubenswrapper[4760]: break Dec 27 05:45:02 crc kubenswrapper[4760]: fi Dec 27 05:45:02 crc kubenswrapper[4760]: done Dec 27 05:45:02 crc kubenswrapper[4760]: done Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: # Update /etc/hosts only if we get valid service IPs Dec 27 05:45:02 crc kubenswrapper[4760]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 27 05:45:02 crc kubenswrapper[4760]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 27 05:45:02 crc kubenswrapper[4760]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 27 05:45:02 crc kubenswrapper[4760]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 27 05:45:02 crc kubenswrapper[4760]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 27 05:45:02 crc kubenswrapper[4760]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 27 05:45:02 crc kubenswrapper[4760]: sleep 60 & wait Dec 27 05:45:02 crc kubenswrapper[4760]: continue Dec 27 05:45:02 crc kubenswrapper[4760]: fi Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: # Append resolver entries for services Dec 27 05:45:02 crc kubenswrapper[4760]: rc=0 Dec 27 05:45:02 crc kubenswrapper[4760]: for svc in "${!svc_ips[@]}"; do Dec 27 05:45:02 crc kubenswrapper[4760]: for ip in ${svc_ips[${svc}]}; do Dec 27 05:45:02 crc kubenswrapper[4760]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 27 05:45:02 crc kubenswrapper[4760]: done Dec 27 05:45:02 crc kubenswrapper[4760]: done Dec 27 05:45:02 crc kubenswrapper[4760]: if [[ $rc -ne 0 ]]; then Dec 27 05:45:02 crc kubenswrapper[4760]: sleep 60 & wait Dec 27 05:45:02 crc kubenswrapper[4760]: continue Dec 27 05:45:02 crc kubenswrapper[4760]: fi Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 27 05:45:02 crc kubenswrapper[4760]: # Replace /etc/hosts with our modified version if needed Dec 27 05:45:02 crc kubenswrapper[4760]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 27 05:45:02 crc kubenswrapper[4760]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 27 05:45:02 crc kubenswrapper[4760]: fi Dec 27 05:45:02 crc kubenswrapper[4760]: sleep 60 & wait Dec 27 05:45:02 crc kubenswrapper[4760]: unset svc_ips Dec 27 05:45:02 crc kubenswrapper[4760]: done Dec 27 05:45:02 crc kubenswrapper[4760]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bksjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-t57k6_openshift-dns(67571db3-3f43-4589-bf18-a42b6ea3da12): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 27 05:45:02 crc kubenswrapper[4760]: > logger="UnhandledError" Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.159336 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-t57k6" podUID="67571db3-3f43-4589-bf18-a42b6ea3da12" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.182895 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 
05:45:02.183000 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.183036 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.183122 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:03.183074229 +0000 UTC m=+25.943143544 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.183152 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.183193 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.183209 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:03.183193142 +0000 UTC m=+25.943262537 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.183233 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.183231 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.183305 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.183319 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.183349 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:03.183341175 +0000 UTC m=+25.943410490 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.183272 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.183383 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.183415 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:03.183403257 +0000 UTC m=+25.943472632 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.183422 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.183434 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.183483 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:03.183467508 +0000 UTC m=+25.943536823 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.296890 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.313751 4760 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.336880 4760 csr.go:261] certificate signing request csr-bnfqd is approved, waiting to be issued Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.341903 4760 csr.go:257] certificate signing request csr-bnfqd is issued Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.593199 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-xhkgh"] Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.593567 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-fmk6w"] Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.593815 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.594356 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.597274 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-r298b"] Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.597774 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.600354 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.600465 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.600598 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.600641 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.600735 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.600820 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.600892 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.600997 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.601126 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.601647 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.601883 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.602233 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.609805 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.620925 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.630008 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.638376 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.646145 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.653958 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.661152 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 
27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.669688 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.671296 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928"} Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.671470 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.672290 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-t57k6" event={"ID":"67571db3-3f43-4589-bf18-a42b6ea3da12","Type":"ContainerStarted","Data":"033ad5b606e36bccaf0c212094f19223d9e1267d9a05de5b1d139c8a96a764e8"} Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.672989 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"db87d055b3d748f62ec489f8a5f2234f4baca755e40a18fb5560992ee9839fcf"} Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.673339 4760 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 27 05:45:02 crc kubenswrapper[4760]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Dec 27 05:45:02 crc kubenswrapper[4760]: set -uo pipefail Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 27 05:45:02 crc kubenswrapper[4760]: HOSTS_FILE="/etc/hosts" Dec 27 05:45:02 crc kubenswrapper[4760]: TEMP_FILE="/etc/hosts.tmp" Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: # Make a temporary file with the old hosts file's attributes. Dec 27 05:45:02 crc kubenswrapper[4760]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 27 05:45:02 crc kubenswrapper[4760]: echo "Failed to preserve hosts file. Exiting." Dec 27 05:45:02 crc kubenswrapper[4760]: exit 1 Dec 27 05:45:02 crc kubenswrapper[4760]: fi Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: while true; do Dec 27 05:45:02 crc kubenswrapper[4760]: declare -A svc_ips Dec 27 05:45:02 crc kubenswrapper[4760]: for svc in "${services[@]}"; do Dec 27 05:45:02 crc kubenswrapper[4760]: # Fetch service IP from cluster dns if present. We make several tries Dec 27 05:45:02 crc kubenswrapper[4760]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 27 05:45:02 crc kubenswrapper[4760]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 27 05:45:02 crc kubenswrapper[4760]: # support UDP loadbalancers and require reaching DNS through TCP. 
Dec 27 05:45:02 crc kubenswrapper[4760]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 27 05:45:02 crc kubenswrapper[4760]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 27 05:45:02 crc kubenswrapper[4760]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 27 05:45:02 crc kubenswrapper[4760]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 27 05:45:02 crc kubenswrapper[4760]: for i in ${!cmds[*]} Dec 27 05:45:02 crc kubenswrapper[4760]: do Dec 27 05:45:02 crc kubenswrapper[4760]: ips=($(eval "${cmds[i]}")) Dec 27 05:45:02 crc kubenswrapper[4760]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 27 05:45:02 crc kubenswrapper[4760]: svc_ips["${svc}"]="${ips[@]}" Dec 27 05:45:02 crc kubenswrapper[4760]: break Dec 27 05:45:02 crc kubenswrapper[4760]: fi Dec 27 05:45:02 crc kubenswrapper[4760]: done Dec 27 05:45:02 crc kubenswrapper[4760]: done Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: # Update /etc/hosts only if we get valid service IPs Dec 27 05:45:02 crc kubenswrapper[4760]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 27 05:45:02 crc kubenswrapper[4760]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 27 05:45:02 crc kubenswrapper[4760]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 27 05:45:02 crc kubenswrapper[4760]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 27 05:45:02 crc kubenswrapper[4760]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 27 05:45:02 crc kubenswrapper[4760]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 27 05:45:02 crc kubenswrapper[4760]: sleep 60 & wait Dec 27 05:45:02 crc kubenswrapper[4760]: continue Dec 27 05:45:02 crc kubenswrapper[4760]: fi Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: # Append resolver entries for services Dec 27 05:45:02 crc kubenswrapper[4760]: rc=0 Dec 27 05:45:02 crc kubenswrapper[4760]: for svc in "${!svc_ips[@]}"; do Dec 27 05:45:02 crc kubenswrapper[4760]: for ip in ${svc_ips[${svc}]}; do Dec 27 05:45:02 crc kubenswrapper[4760]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 27 05:45:02 crc kubenswrapper[4760]: done Dec 27 05:45:02 crc kubenswrapper[4760]: done Dec 27 05:45:02 crc kubenswrapper[4760]: if [[ $rc -ne 0 ]]; then Dec 27 05:45:02 crc kubenswrapper[4760]: sleep 60 & wait Dec 27 05:45:02 crc kubenswrapper[4760]: continue Dec 27 05:45:02 crc kubenswrapper[4760]: fi Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 27 05:45:02 crc kubenswrapper[4760]: # Replace /etc/hosts with our modified version if needed Dec 27 05:45:02 crc kubenswrapper[4760]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 27 05:45:02 crc kubenswrapper[4760]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 27 05:45:02 crc kubenswrapper[4760]: fi Dec 27 05:45:02 crc kubenswrapper[4760]: sleep 60 & wait Dec 27 05:45:02 crc kubenswrapper[4760]: unset svc_ips Dec 27 05:45:02 crc kubenswrapper[4760]: done Dec 27 05:45:02 crc kubenswrapper[4760]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bksjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-t57k6_openshift-dns(67571db3-3f43-4589-bf18-a42b6ea3da12): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 27 05:45:02 crc kubenswrapper[4760]: > logger="UnhandledError" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.673714 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.677465 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-t57k6" podUID="67571db3-3f43-4589-bf18-a42b6ea3da12" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.678417 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cd07715c94d7e868b53e9e4b2eb9ee8233efd8eeb2c3e0e4ddb25273e16c55f8"} Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.678659 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.679279 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f43bb3db8159ad2434dad274b80b146f1e98cb95553105cc9bcda457e2f83592"} Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.679895 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.681749 4760 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 27 05:45:02 crc kubenswrapper[4760]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Dec 27 05:45:02 crc kubenswrapper[4760]: if [[ -f "/env/_master" ]]; then Dec 27 05:45:02 crc kubenswrapper[4760]: set -o allexport Dec 27 05:45:02 crc kubenswrapper[4760]: source "/env/_master" Dec 27 05:45:02 crc kubenswrapper[4760]: set +o allexport Dec 27 05:45:02 crc kubenswrapper[4760]: fi Dec 27 05:45:02 crc kubenswrapper[4760]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Dec 27 05:45:02 crc kubenswrapper[4760]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 27 05:45:02 crc kubenswrapper[4760]: ho_enable="--enable-hybrid-overlay" Dec 27 05:45:02 crc kubenswrapper[4760]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 27 05:45:02 crc kubenswrapper[4760]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 27 05:45:02 crc kubenswrapper[4760]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 27 05:45:02 crc kubenswrapper[4760]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 27 05:45:02 crc kubenswrapper[4760]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 27 05:45:02 crc kubenswrapper[4760]: --webhook-host=127.0.0.1 \ Dec 27 05:45:02 crc kubenswrapper[4760]: --webhook-port=9743 \ Dec 27 05:45:02 crc kubenswrapper[4760]: ${ho_enable} \ Dec 27 05:45:02 crc kubenswrapper[4760]: --enable-interconnect \ Dec 27 05:45:02 crc kubenswrapper[4760]: --disable-approver \ Dec 27 05:45:02 crc kubenswrapper[4760]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 27 05:45:02 crc kubenswrapper[4760]: --wait-for-kubernetes-api=200s \ Dec 27 05:45:02 crc kubenswrapper[4760]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 27 05:45:02 crc kubenswrapper[4760]: --loglevel="${LOGLEVEL}" Dec 27 05:45:02 crc kubenswrapper[4760]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 27 05:45:02 crc kubenswrapper[4760]: > logger="UnhandledError" Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.682353 4760 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 27 05:45:02 crc kubenswrapper[4760]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Dec 27 05:45:02 crc kubenswrapper[4760]: set -o allexport Dec 27 05:45:02 crc kubenswrapper[4760]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 27 05:45:02 crc kubenswrapper[4760]: source /etc/kubernetes/apiserver-url.env Dec 27 05:45:02 crc kubenswrapper[4760]: else Dec 27 05:45:02 crc kubenswrapper[4760]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 27 05:45:02 crc kubenswrapper[4760]: exit 1 Dec 27 05:45:02 crc kubenswrapper[4760]: fi Dec 27 05:45:02 crc kubenswrapper[4760]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 27 05:45:02 crc kubenswrapper[4760]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 27 05:45:02 crc kubenswrapper[4760]: > logger="UnhandledError" Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.683637 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.683814 4760 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 27 05:45:02 crc kubenswrapper[4760]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Dec 27 05:45:02 crc kubenswrapper[4760]: if [[ -f "/env/_master" ]]; then Dec 27 05:45:02 crc kubenswrapper[4760]: set -o allexport Dec 27 05:45:02 crc kubenswrapper[4760]: source "/env/_master" Dec 27 05:45:02 crc kubenswrapper[4760]: set +o allexport Dec 27 05:45:02 crc kubenswrapper[4760]: fi Dec 27 05:45:02 crc kubenswrapper[4760]: Dec 27 05:45:02 crc kubenswrapper[4760]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 27 05:45:02 crc kubenswrapper[4760]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 27 05:45:02 crc kubenswrapper[4760]: --disable-webhook \ Dec 27 05:45:02 crc kubenswrapper[4760]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 27 05:45:02 crc kubenswrapper[4760]: --loglevel="${LOGLEVEL}" Dec 27 05:45:02 crc kubenswrapper[4760]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 27 05:45:02 crc kubenswrapper[4760]: > logger="UnhandledError" Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.684942 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686536 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-cnibin\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686566 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/068e7548-398e-4313-a68a-fc4dcc88fbc6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686591 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4817e744-ce93-48b6-8642-f3ae31d2db1b-rootfs\") pod \"machine-config-daemon-xhkgh\" (UID: \"4817e744-ce93-48b6-8642-f3ae31d2db1b\") " pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686606 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-multus-daemon-config\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686629 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/068e7548-398e-4313-a68a-fc4dcc88fbc6-cnibin\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686645 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jn6r\" (UniqueName: \"kubernetes.io/projected/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-kube-api-access-6jn6r\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686659 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqbtl\" (UniqueName: \"kubernetes.io/projected/4817e744-ce93-48b6-8642-f3ae31d2db1b-kube-api-access-rqbtl\") pod \"machine-config-daemon-xhkgh\" (UID: \"4817e744-ce93-48b6-8642-f3ae31d2db1b\") " pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686689 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4817e744-ce93-48b6-8642-f3ae31d2db1b-proxy-tls\") pod \"machine-config-daemon-xhkgh\" (UID: \"4817e744-ce93-48b6-8642-f3ae31d2db1b\") " pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686704 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-etc-kubernetes\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686718 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-var-lib-cni-multus\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686731 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-multus-conf-dir\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686753 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-run-k8s-cni-cncf-io\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " 
pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686766 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-run-netns\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686813 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4817e744-ce93-48b6-8642-f3ae31d2db1b-mcd-auth-proxy-config\") pod \"machine-config-daemon-xhkgh\" (UID: \"4817e744-ce93-48b6-8642-f3ae31d2db1b\") " pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686844 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-run-multus-certs\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686860 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zwdm\" (UniqueName: \"kubernetes.io/projected/068e7548-398e-4313-a68a-fc4dcc88fbc6-kube-api-access-5zwdm\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686889 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-multus-cni-dir\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686905 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-cni-binary-copy\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686927 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-os-release\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.686960 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-var-lib-kubelet\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.687022 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/068e7548-398e-4313-a68a-fc4dcc88fbc6-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.687042 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/068e7548-398e-4313-a68a-fc4dcc88fbc6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.687129 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-multus-socket-dir-parent\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.687155 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-var-lib-cni-bin\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.687172 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-hostroot\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.687217 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/068e7548-398e-4313-a68a-fc4dcc88fbc6-system-cni-dir\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.687249 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/068e7548-398e-4313-a68a-fc4dcc88fbc6-os-release\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.687273 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-system-cni-dir\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.691080 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.700027 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.708991 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.714922 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.730869 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.739976 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.748050 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.759532 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.771314 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.782329 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.787805 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-cni-binary-copy\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.787844 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-multus-cni-dir\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.787864 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/068e7548-398e-4313-a68a-fc4dcc88fbc6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.787880 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-os-release\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.787899 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-var-lib-kubelet\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.787913 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/068e7548-398e-4313-a68a-fc4dcc88fbc6-cni-binary-copy\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.787936 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/068e7548-398e-4313-a68a-fc4dcc88fbc6-system-cni-dir\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.787953 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/068e7548-398e-4313-a68a-fc4dcc88fbc6-os-release\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.787971 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-multus-socket-dir-parent\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.787988 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-var-lib-cni-bin\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788001 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-hostroot\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788016 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-system-cni-dir\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788031 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-cnibin\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788044 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/068e7548-398e-4313-a68a-fc4dcc88fbc6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788079 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4817e744-ce93-48b6-8642-f3ae31d2db1b-rootfs\") pod \"machine-config-daemon-xhkgh\" (UID: \"4817e744-ce93-48b6-8642-f3ae31d2db1b\") " pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788109 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-multus-daemon-config\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788123 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/068e7548-398e-4313-a68a-fc4dcc88fbc6-cnibin\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788161 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jn6r\" (UniqueName: \"kubernetes.io/projected/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-kube-api-access-6jn6r\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788180 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqbtl\" (UniqueName: \"kubernetes.io/projected/4817e744-ce93-48b6-8642-f3ae31d2db1b-kube-api-access-rqbtl\") pod \"machine-config-daemon-xhkgh\" (UID: \"4817e744-ce93-48b6-8642-f3ae31d2db1b\") " pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788213 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4817e744-ce93-48b6-8642-f3ae31d2db1b-proxy-tls\") pod \"machine-config-daemon-xhkgh\" (UID: \"4817e744-ce93-48b6-8642-f3ae31d2db1b\") " pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788227 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-etc-kubernetes\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788240 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-multus-conf-dir\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788261 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-var-lib-cni-multus\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788284 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-run-k8s-cni-cncf-io\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788300 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-run-netns\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788328 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zwdm\" (UniqueName: 
\"kubernetes.io/projected/068e7548-398e-4313-a68a-fc4dcc88fbc6-kube-api-access-5zwdm\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788342 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4817e744-ce93-48b6-8642-f3ae31d2db1b-mcd-auth-proxy-config\") pod \"machine-config-daemon-xhkgh\" (UID: \"4817e744-ce93-48b6-8642-f3ae31d2db1b\") " pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788356 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-run-multus-certs\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788410 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-run-multus-certs\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788465 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-os-release\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788486 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-var-lib-kubelet\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788547 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-cni-binary-copy\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788709 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/068e7548-398e-4313-a68a-fc4dcc88fbc6-system-cni-dir\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788754 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/068e7548-398e-4313-a68a-fc4dcc88fbc6-os-release\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788762 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/068e7548-398e-4313-a68a-fc4dcc88fbc6-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788787 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-multus-socket-dir-parent\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788819 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-var-lib-cni-bin\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788842 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-hostroot\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788852 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-multus-cni-dir\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788870 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-system-cni-dir\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788898 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4817e744-ce93-48b6-8642-f3ae31d2db1b-rootfs\") pod \"machine-config-daemon-xhkgh\" (UID: \"4817e744-ce93-48b6-8642-f3ae31d2db1b\") " pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.788987 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/068e7548-398e-4313-a68a-fc4dcc88fbc6-cni-binary-copy\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.789078 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-cnibin\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.789234 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/068e7548-398e-4313-a68a-fc4dcc88fbc6-cnibin\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.789248 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-var-lib-cni-multus\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.789280 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-run-netns\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.789646 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-etc-kubernetes\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.789643 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-host-run-k8s-cni-cncf-io\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.789672 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-multus-conf-dir\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.789995 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-multus-daemon-config\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.790141 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4817e744-ce93-48b6-8642-f3ae31d2db1b-mcd-auth-proxy-config\") pod \"machine-config-daemon-xhkgh\" (UID: \"4817e744-ce93-48b6-8642-f3ae31d2db1b\") " pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.790210 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/068e7548-398e-4313-a68a-fc4dcc88fbc6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.797653 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.805622 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4817e744-ce93-48b6-8642-f3ae31d2db1b-proxy-tls\") pod \"machine-config-daemon-xhkgh\" (UID: \"4817e744-ce93-48b6-8642-f3ae31d2db1b\") " pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.813059 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zwdm\" (UniqueName: \"kubernetes.io/projected/068e7548-398e-4313-a68a-fc4dcc88fbc6-kube-api-access-5zwdm\") pod \"multus-additional-cni-plugins-r298b\" (UID: \"068e7548-398e-4313-a68a-fc4dcc88fbc6\") " pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.814165 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jn6r\" (UniqueName: \"kubernetes.io/projected/b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d-kube-api-access-6jn6r\") pod \"multus-fmk6w\" (UID: \"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\") " pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.815689 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqbtl\" (UniqueName: \"kubernetes.io/projected/4817e744-ce93-48b6-8642-f3ae31d2db1b-kube-api-access-rqbtl\") pod \"machine-config-daemon-xhkgh\" (UID: 
\"4817e744-ce93-48b6-8642-f3ae31d2db1b\") " pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.820499 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.863871 
4760 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.904631 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fmk6w" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.912758 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-r298b" Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.918505 4760 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 27 05:45:02 crc kubenswrapper[4760]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 27 05:45:02 crc kubenswrapper[4760]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 27 05:45:02 crc kubenswrapper[4760]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jn6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-fmk6w_openshift-multus(b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 27 05:45:02 crc kubenswrapper[4760]: > logger="UnhandledError" Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.919631 4760 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-fmk6w" podUID="b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.919721 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.925618 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5zwdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-r298b_openshift-multus(068e7548-398e-4313-a68a-fc4dcc88fbc6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.927971 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-r298b" podUID="068e7548-398e-4313-a68a-fc4dcc88fbc6" Dec 27 05:45:02 crc kubenswrapper[4760]: W1227 05:45:02.929558 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4817e744_ce93_48b6_8642_f3ae31d2db1b.slice/crio-cafd08498adcbaa8f0644f66239fd75212e9c1edd2256316e3d23590abf532ce WatchSource:0}: Error finding container cafd08498adcbaa8f0644f66239fd75212e9c1edd2256316e3d23590abf532ce: Status 404 returned error can't find the container with id cafd08498adcbaa8f0644f66239fd75212e9c1edd2256316e3d23590abf532ce Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.931559 4760 kuberuntime_manager.go:1274] "Unhandled 
Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqbtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.934454 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqbtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 27 05:45:02 crc kubenswrapper[4760]: E1227 05:45:02.935626 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.982190 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rmm9v"] Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.984382 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.987313 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.987619 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.987758 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.987924 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.989290 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.989481 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.990108 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 27 05:45:02 crc kubenswrapper[4760]: I1227 05:45:02.997586 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.009733 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.018400 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.030162 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.038186 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.050916 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.061878 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.078076 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090435 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-node-log\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090470 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090545 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-ovnkube-script-lib\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090599 
4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-run-netns\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090629 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-slash\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090682 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-systemd\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090703 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-ovn\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090758 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-kubelet\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090781 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-etc-openvswitch\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090804 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-var-lib-openvswitch\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090830 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-env-overrides\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090866 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-run-ovn-kubernetes\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090891 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-openvswitch\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090911 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-cni-netd\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090926 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-ovnkube-config\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090950 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-systemd-units\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.090976 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g5xx\" (UniqueName: \"kubernetes.io/projected/7c8aa557-e11a-4c40-9179-22811f44ff18-kube-api-access-8g5xx\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.091017 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c8aa557-e11a-4c40-9179-22811f44ff18-ovn-node-metrics-cert\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.091057 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-log-socket\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.091076 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-cni-bin\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.096884 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.108179 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.117031 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.130022 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.191527 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.191776 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:05.191742891 +0000 UTC m=+27.951812206 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.191977 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192132 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-systemd-units\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.192183 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.192299 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.192318 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192210 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-systemd-units\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192267 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g5xx\" (UniqueName: \"kubernetes.io/projected/7c8aa557-e11a-4c40-9179-22811f44ff18-kube-api-access-8g5xx\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.192380 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:05.192352155 +0000 UTC m=+27.952421560 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192486 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c8aa557-e11a-4c40-9179-22811f44ff18-ovn-node-metrics-cert\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192531 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192561 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-log-socket\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192587 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-cni-bin\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192610 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-node-log\") pod \"ovnkube-node-rmm9v\" (UID: 
\"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.192624 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192634 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192652 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-log-socket\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192680 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192687 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-cni-bin\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.192664 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:05.192652993 +0000 UTC m=+27.952722308 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192724 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-ovnkube-script-lib\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192749 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-run-netns\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192767 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-slash\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192785 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-systemd\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192805 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-ovn\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192826 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192846 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-run-netns\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192858 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-slash\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192833 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-node-log\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192856 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-systemd\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192877 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-kubelet\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192856 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-kubelet\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.192940 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192973 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-ovn\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.192975 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-etc-openvswitch\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.193003 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-etc-openvswitch\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.193042 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-var-lib-openvswitch\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.193062 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-env-overrides\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 
05:45:03.193115 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:05.193061662 +0000 UTC m=+27.953131077 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.193141 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-var-lib-openvswitch\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.193157 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-run-ovn-kubernetes\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.193188 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-run-ovn-kubernetes\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.193205 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.193239 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-openvswitch\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.193268 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-cni-netd\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.193299 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-ovnkube-config\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.193358 4760 projected.go:288] Couldn't 
get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.193375 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.193388 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.193422 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:05.19341046 +0000 UTC m=+27.953479795 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.193441 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-ovnkube-script-lib\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.193452 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-openvswitch\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.193448 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-env-overrides\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.193472 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-cni-netd\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.194126 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-ovnkube-config\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.197380 4760 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c8aa557-e11a-4c40-9179-22811f44ff18-ovn-node-metrics-cert\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.212293 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g5xx\" (UniqueName: \"kubernetes.io/projected/7c8aa557-e11a-4c40-9179-22811f44ff18-kube-api-access-8g5xx\") pod \"ovnkube-node-rmm9v\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.300631 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:03 crc kubenswrapper[4760]: W1227 05:45:03.314141 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c8aa557_e11a_4c40_9179_22811f44ff18.slice/crio-b9c9b4e1450c371e7cdf9537f7c5669fd91a557d37989de2aa9b4fadec16c44e WatchSource:0}: Error finding container b9c9b4e1450c371e7cdf9537f7c5669fd91a557d37989de2aa9b4fadec16c44e: Status 404 returned error can't find the container with id b9c9b4e1450c371e7cdf9537f7c5669fd91a557d37989de2aa9b4fadec16c44e Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.316490 4760 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 27 05:45:03 crc kubenswrapper[4760]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 27 05:45:03 crc kubenswrapper[4760]: apiVersion: v1 Dec 27 05:45:03 crc kubenswrapper[4760]: clusters: Dec 27 05:45:03 crc kubenswrapper[4760]: - cluster: Dec 27 05:45:03 crc kubenswrapper[4760]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 27 05:45:03 crc kubenswrapper[4760]: server: https://api-int.crc.testing:6443 Dec 27 05:45:03 crc kubenswrapper[4760]: name: default-cluster Dec 27 05:45:03 crc kubenswrapper[4760]: contexts: Dec 27 05:45:03 crc kubenswrapper[4760]: - context: Dec 27 05:45:03 crc kubenswrapper[4760]: cluster: default-cluster Dec 27 05:45:03 crc kubenswrapper[4760]: namespace: default Dec 27 05:45:03 crc kubenswrapper[4760]: user: default-auth Dec 27 05:45:03 crc kubenswrapper[4760]: name: default-context Dec 27 05:45:03 crc kubenswrapper[4760]: current-context: default-context Dec 27 05:45:03 crc kubenswrapper[4760]: kind: Config Dec 27 05:45:03 crc kubenswrapper[4760]: preferences: {} Dec 27 05:45:03 crc kubenswrapper[4760]: users: Dec 27 05:45:03 crc kubenswrapper[4760]: - name: default-auth Dec 27 05:45:03 crc kubenswrapper[4760]: user: Dec 27 05:45:03 crc kubenswrapper[4760]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 27 05:45:03 crc kubenswrapper[4760]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 27 05:45:03 crc kubenswrapper[4760]: EOF Dec 27 05:45:03 crc kubenswrapper[4760]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g5xx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-rmm9v_openshift-ovn-kubernetes(7c8aa557-e11a-4c40-9179-22811f44ff18): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 27 05:45:03 crc kubenswrapper[4760]: > logger="UnhandledError" Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.318219 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.342967 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-27 05:40:02 +0000 UTC, rotation deadline is 2026-11-02 10:34:55.000031028 +0000 UTC Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.343116 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7444h49m51.65695042s for next certificate rotation Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.398476 4760 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.501873 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.501873 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.501998 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.502035 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.502110 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 27 05:45:03 crc kubenswrapper[4760]: E1227 05:45:03.502291 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.509820 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.510369 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.511560 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.512233 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.513136 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.513650 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.514225 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.515139 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.515715 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.516679 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.517169 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.518171 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.518722 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.519465 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.520345 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.520888 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.521956 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.522410 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.522972 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.523944 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.524382 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.525449 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.525906 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.526954 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.527494 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.528157 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.529357 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.529790 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.530690 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.531175 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.532075 4760 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.532190 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.533703 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.534578 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.535002 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.536639 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.537244 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.538043 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" 
path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.538862 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.539918 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.540396 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.541419 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.542168 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.543115 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.543547 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.544402 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.544867 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.545893 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.546448 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.547253 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.547694 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.548568 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.549108 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.549541 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.688620 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerStarted","Data":"b9c9b4e1450c371e7cdf9537f7c5669fd91a557d37989de2aa9b4fadec16c44e"} Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.690075 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" event={"ID":"068e7548-398e-4313-a68a-fc4dcc88fbc6","Type":"ContainerStarted","Data":"ac5730b8357e5d96203b0eb438da254a6e283ceb792d724cd0f93837482594a7"} Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.691792 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fmk6w" event={"ID":"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d","Type":"ContainerStarted","Data":"b3c740b55d5f0ccdd838ba002c1db3bee4fc7e58f2370e396a8ae535392836e9"} Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.693854 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerStarted","Data":"cafd08498adcbaa8f0644f66239fd75212e9c1edd2256316e3d23590abf532ce"} Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.707037 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.720593 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.732686 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.742713 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.749606 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.758402 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.769780 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.779380 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.792295 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc 
kubenswrapper[4760]: I1227 05:45:03.804476 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.824017 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.835625 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.846174 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.863061 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.872513 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.882190 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.892013 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.904769 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.914215 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.923194 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.930183 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.937740 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.947221 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:03 crc kubenswrapper[4760]: I1227 05:45:03.955634 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.698483 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerStarted","Data":"1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737"} Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.698530 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerStarted","Data":"16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79"} Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.700261 4760 generic.go:334] "Generic (PLEG): container finished" podID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerID="203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c" exitCode=0 Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.700362 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerDied","Data":"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c"} Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.702354 4760 generic.go:334] "Generic (PLEG): container finished" podID="068e7548-398e-4313-a68a-fc4dcc88fbc6" containerID="191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b" exitCode=0 Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.702438 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" event={"ID":"068e7548-398e-4313-a68a-fc4dcc88fbc6","Type":"ContainerDied","Data":"191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b"} Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.706276 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fmk6w" event={"ID":"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d","Type":"ContainerStarted","Data":"2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162"} Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.726381 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.744944 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.756382 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.766307 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.775193 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.784160 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.795141 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.804167 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.813197 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.834137 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-n82kv"] Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.833980 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\
\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\
",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.834645 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-n82kv" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.836765 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.836872 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.837239 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.837474 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.848059 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.860008 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.878138 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.890128 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.901187 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.911977 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/13840804-f114-41e8-947a-df4f0a6ec1ba-serviceca\") pod \"node-ca-n82kv\" (UID: \"13840804-f114-41e8-947a-df4f0a6ec1ba\") " pod="openshift-image-registry/node-ca-n82kv" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.912020 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13840804-f114-41e8-947a-df4f0a6ec1ba-host\") pod \"node-ca-n82kv\" (UID: \"13840804-f114-41e8-947a-df4f0a6ec1ba\") " pod="openshift-image-registry/node-ca-n82kv" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.912887 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs6jq\" (UniqueName: \"kubernetes.io/projected/13840804-f114-41e8-947a-df4f0a6ec1ba-kube-api-access-gs6jq\") pod \"node-ca-n82kv\" (UID: \"13840804-f114-41e8-947a-df4f0a6ec1ba\") " pod="openshift-image-registry/node-ca-n82kv" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.914924 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.924417 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.933255 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.944398 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\
"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\"
,\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.952464 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.960945 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.969138 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.976842 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.984303 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:04 crc kubenswrapper[4760]: I1227 05:45:04.990074 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.013650 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/13840804-f114-41e8-947a-df4f0a6ec1ba-serviceca\") pod \"node-ca-n82kv\" (UID: \"13840804-f114-41e8-947a-df4f0a6ec1ba\") " pod="openshift-image-registry/node-ca-n82kv" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.013683 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13840804-f114-41e8-947a-df4f0a6ec1ba-host\") pod \"node-ca-n82kv\" (UID: \"13840804-f114-41e8-947a-df4f0a6ec1ba\") " pod="openshift-image-registry/node-ca-n82kv" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.013710 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs6jq\" (UniqueName: \"kubernetes.io/projected/13840804-f114-41e8-947a-df4f0a6ec1ba-kube-api-access-gs6jq\") pod \"node-ca-n82kv\" (UID: \"13840804-f114-41e8-947a-df4f0a6ec1ba\") " pod="openshift-image-registry/node-ca-n82kv" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.013807 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13840804-f114-41e8-947a-df4f0a6ec1ba-host\") pod \"node-ca-n82kv\" (UID: \"13840804-f114-41e8-947a-df4f0a6ec1ba\") " pod="openshift-image-registry/node-ca-n82kv" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.015151 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/13840804-f114-41e8-947a-df4f0a6ec1ba-serviceca\") pod \"node-ca-n82kv\" (UID: \"13840804-f114-41e8-947a-df4f0a6ec1ba\") " pod="openshift-image-registry/node-ca-n82kv" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.030593 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs6jq\" (UniqueName: \"kubernetes.io/projected/13840804-f114-41e8-947a-df4f0a6ec1ba-kube-api-access-gs6jq\") pod \"node-ca-n82kv\" (UID: \"13840804-f114-41e8-947a-df4f0a6ec1ba\") " pod="openshift-image-registry/node-ca-n82kv" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.215708 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.215866 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:05 crc kubenswrapper[4760]: E1227 05:45:05.215915 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-27 05:45:09.215888747 +0000 UTC m=+31.975958062 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.215958 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:05 crc kubenswrapper[4760]: E1227 05:45:05.216014 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 27 05:45:05 crc kubenswrapper[4760]: E1227 05:45:05.216041 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 27 05:45:05 crc kubenswrapper[4760]: E1227 05:45:05.216059 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:05 crc kubenswrapper[4760]: E1227 05:45:05.216067 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 27 05:45:05 crc kubenswrapper[4760]: E1227 05:45:05.216118 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:09.216105852 +0000 UTC m=+31.976175167 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 27 05:45:05 crc kubenswrapper[4760]: E1227 05:45:05.216151 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:09.216125473 +0000 UTC m=+31.976194878 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.216016 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.216248 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:05 crc kubenswrapper[4760]: E1227 05:45:05.216355 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 27 05:45:05 crc kubenswrapper[4760]: E1227 05:45:05.216424 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:09.216409219 +0000 UTC m=+31.976478634 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 27 05:45:05 crc kubenswrapper[4760]: E1227 05:45:05.216467 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 27 05:45:05 crc kubenswrapper[4760]: E1227 05:45:05.216532 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 27 05:45:05 crc kubenswrapper[4760]: E1227 05:45:05.216546 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:05 crc kubenswrapper[4760]: E1227 05:45:05.216614 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:09.216594744 +0000 UTC m=+31.976664069 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.426626 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-n82kv" Dec 27 05:45:05 crc kubenswrapper[4760]: W1227 05:45:05.444297 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13840804_f114_41e8_947a_df4f0a6ec1ba.slice/crio-79a0316dd3f87c30e88deb659c45ecdf13dd0f4b605c009dbe6b5e50cc45f7e6 WatchSource:0}: Error finding container 79a0316dd3f87c30e88deb659c45ecdf13dd0f4b605c009dbe6b5e50cc45f7e6: Status 404 returned error can't find the container with id 79a0316dd3f87c30e88deb659c45ecdf13dd0f4b605c009dbe6b5e50cc45f7e6 Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.501767 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.501826 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:05 crc kubenswrapper[4760]: E1227 05:45:05.502307 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:05 crc kubenswrapper[4760]: E1227 05:45:05.502459 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.501955 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:05 crc kubenswrapper[4760]: E1227 05:45:05.502581 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.710678 4760 generic.go:334] "Generic (PLEG): container finished" podID="068e7548-398e-4313-a68a-fc4dcc88fbc6" containerID="3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5" exitCode=0 Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.710748 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" event={"ID":"068e7548-398e-4313-a68a-fc4dcc88fbc6","Type":"ContainerDied","Data":"3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5"} Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.712201 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-n82kv" event={"ID":"13840804-f114-41e8-947a-df4f0a6ec1ba","Type":"ContainerStarted","Data":"79a0316dd3f87c30e88deb659c45ecdf13dd0f4b605c009dbe6b5e50cc45f7e6"} Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.720483 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerStarted","Data":"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a"} Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.720542 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerStarted","Data":"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d"} Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.720555 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerStarted","Data":"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156"} Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.720569 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerStarted","Data":"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e"} Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.721254 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.727758 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c2
07dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.735273 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.744271 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.753453 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.765975 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.791935 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":
\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-
dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.802108 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.811601 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.827020 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.835965 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.845519 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:05 crc kubenswrapper[4760]: I1227 05:45:05.855383 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.726639 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-n82kv" event={"ID":"13840804-f114-41e8-947a-df4f0a6ec1ba","Type":"ContainerStarted","Data":"b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112"} Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.731789 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerStarted","Data":"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826"} Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.731873 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerStarted","Data":"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba"} Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.734840 4760 generic.go:334] "Generic (PLEG): container finished" podID="068e7548-398e-4313-a68a-fc4dcc88fbc6" containerID="2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143" exitCode=0 Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.734894 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" event={"ID":"068e7548-398e-4313-a68a-fc4dcc88fbc6","Type":"ContainerDied","Data":"2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143"} Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.740627 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.752359 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.764516 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.785879 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.800189 
4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:
44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.810939 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.821851 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\
\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d
742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.828519 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.836843 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.847807 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c2
07dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.858629 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.869849 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.878557 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.889594 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.902174 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.913497 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.922929 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.935963 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.947478 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.961135 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.980496 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:06 crc kubenswrapper[4760]: I1227 05:45:06.994783 
4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.007303 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.021580 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPa
th\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.030772 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.047869 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.289760 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.296668 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.302229 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.313299 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.325525 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.335309 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.345416 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.354605 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.365775 4760 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.365854 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.395426 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.408622 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.428253 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.438727 
4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:
44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.452126 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.468815 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPa
th\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.479185 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.497849 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.501830 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.501839 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.501901 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:07 crc kubenswrapper[4760]: E1227 05:45:07.501935 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 27 05:45:07 crc kubenswrapper[4760]: E1227 05:45:07.502016 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 27 05:45:07 crc kubenswrapper[4760]: E1227 05:45:07.502107 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.510383 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.527050 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.542790 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.555994 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.565598 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.576254 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.598714 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.613586 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.629831 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.645742 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.675488 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":
\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-
dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.692292 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.707596 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.725246 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.740330 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.755536 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.773801 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.791903 
4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:
44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.802736 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.822378 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPa
th\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.832085 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.843548 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.857315 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c2
07dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.873902 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.889316 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.902231 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:07 crc kubenswrapper[4760]: I1227 05:45:07.917913 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.147700 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.150330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.150374 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.150392 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.150481 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.163211 4760 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.163783 4760 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.165497 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.165585 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.165615 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.165650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.165677 4760 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:08Z","lastTransitionTime":"2025-12-27T05:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:08 crc kubenswrapper[4760]: E1227 05:45:08.189275 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cab3fba8-5ed3-434a-8f84-cd17705f5a67\\\",\\\"systemUUID\\\":\\\"a18cf57b-5a1d-4f23-a965-e04c5441f26a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.194155 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.194234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.194254 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.194284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.194303 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:08Z","lastTransitionTime":"2025-12-27T05:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:08 crc kubenswrapper[4760]: E1227 05:45:08.209667 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cab3fba8-5ed3-434a-8f84-cd17705f5a67\\\",\\\"systemUUID\\\":\\\"a18cf57b-5a1d-4f23-a965-e04c5441f26a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.215315 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.215360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.215372 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.215389 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.215400 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:08Z","lastTransitionTime":"2025-12-27T05:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:08 crc kubenswrapper[4760]: E1227 05:45:08.229221 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.234472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.234625 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.234645 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.234676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.234696 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:08Z","lastTransitionTime":"2025-12-27T05:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:08 crc kubenswrapper[4760]: E1227 05:45:08.250472 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.256804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.256861 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.256876 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.256901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.256918 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:08Z","lastTransitionTime":"2025-12-27T05:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:08 crc kubenswrapper[4760]: E1227 05:45:08.273477 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 27 05:45:08 crc kubenswrapper[4760]: E1227 05:45:08.274187 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.279882 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.279919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.279932 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.279956 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.279972 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:08Z","lastTransitionTime":"2025-12-27T05:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.381653 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.381693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.381704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.381722 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.381734 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:08Z","lastTransitionTime":"2025-12-27T05:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.483711 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.484036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.484260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.484385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.484517 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:08Z","lastTransitionTime":"2025-12-27T05:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.588029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.588291 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.588393 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.588525 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.588624 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:08Z","lastTransitionTime":"2025-12-27T05:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.692227 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.692287 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.692305 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.692333 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.692351 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:08Z","lastTransitionTime":"2025-12-27T05:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.795183 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.795840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.795872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.795901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.795926 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:08Z","lastTransitionTime":"2025-12-27T05:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.899007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.899036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.899045 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.899058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:08 crc kubenswrapper[4760]: I1227 05:45:08.899067 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:08Z","lastTransitionTime":"2025-12-27T05:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.002039 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.002147 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.002173 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.002206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.002229 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:09Z","lastTransitionTime":"2025-12-27T05:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.105139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.105181 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.105189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.105206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.105216 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:09Z","lastTransitionTime":"2025-12-27T05:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.208484 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.208525 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.208539 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.208558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.208573 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:09Z","lastTransitionTime":"2025-12-27T05:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.261454 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.261614 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.261682 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 27 05:45:09 crc kubenswrapper[4760]: E1227 05:45:09.261747 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:17.26170834 +0000 UTC m=+40.021777685 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.261808 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 27 05:45:09 crc kubenswrapper[4760]: E1227 05:45:09.261876 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.261884 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 27 05:45:09 crc kubenswrapper[4760]: E1227 05:45:09.261876 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 27 05:45:09 crc kubenswrapper[4760]: E1227 05:45:09.261959 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 27 05:45:09 crc kubenswrapper[4760]: E1227 05:45:09.261980 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 27 05:45:09 crc kubenswrapper[4760]: E1227 05:45:09.261902 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 27 05:45:09 crc kubenswrapper[4760]: E1227 05:45:09.262031 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 27 05:45:09 crc kubenswrapper[4760]: E1227 05:45:09.261983 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:17.261967736 +0000 UTC m=+40.022037081 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 27 05:45:09 crc kubenswrapper[4760]: E1227 05:45:09.262130 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:17.26208 +0000 UTC m=+40.022149345 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 27 05:45:09 crc kubenswrapper[4760]: E1227 05:45:09.262015 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 27 05:45:09 crc kubenswrapper[4760]: E1227 05:45:09.262156 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:17.262142771 +0000 UTC m=+40.022212126 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:09 crc kubenswrapper[4760]: E1227 05:45:09.262175 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:09 crc kubenswrapper[4760]: E1227 05:45:09.262280 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:17.262246674 +0000 UTC m=+40.022316029 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.311601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.311674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.311697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.311728 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.311750 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:09Z","lastTransitionTime":"2025-12-27T05:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.415695 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.415771 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.415791 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.415823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.415843 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:09Z","lastTransitionTime":"2025-12-27T05:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.502417 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.502468 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.502497 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:09 crc kubenswrapper[4760]: E1227 05:45:09.502686 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:09 crc kubenswrapper[4760]: E1227 05:45:09.503046 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 27 05:45:09 crc kubenswrapper[4760]: E1227 05:45:09.502928 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.519667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.519712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.519734 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.519759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.519780 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:09Z","lastTransitionTime":"2025-12-27T05:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.622904 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.622959 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.622968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.622987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.622998 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:09Z","lastTransitionTime":"2025-12-27T05:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.743936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.744248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.744955 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.745032 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.745051 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:09Z","lastTransitionTime":"2025-12-27T05:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.750562 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" event={"ID":"068e7548-398e-4313-a68a-fc4dcc88fbc6","Type":"ContainerStarted","Data":"c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4"} Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.756321 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerStarted","Data":"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396"} Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.848913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.848958 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.848969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.848989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.849002 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:09Z","lastTransitionTime":"2025-12-27T05:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.952649 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.952731 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.952750 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.952779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:09 crc kubenswrapper[4760]: I1227 05:45:09.952796 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:09Z","lastTransitionTime":"2025-12-27T05:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.056193 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.056235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.056247 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.056265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.056278 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:10Z","lastTransitionTime":"2025-12-27T05:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.159481 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.159542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.159560 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.159583 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.159601 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:10Z","lastTransitionTime":"2025-12-27T05:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.262326 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.262361 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.262370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.262387 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.262397 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:10Z","lastTransitionTime":"2025-12-27T05:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.364922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.364970 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.364984 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.365003 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.365015 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:10Z","lastTransitionTime":"2025-12-27T05:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.468017 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.468162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.468181 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.468228 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.468247 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:10Z","lastTransitionTime":"2025-12-27T05:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.570473 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.570519 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.570535 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.570559 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.570578 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:10Z","lastTransitionTime":"2025-12-27T05:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.672948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.672998 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.673015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.673053 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.673070 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:10Z","lastTransitionTime":"2025-12-27T05:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.772977 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.775567 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.775596 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.775609 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.775625 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.775638 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:10Z","lastTransitionTime":"2025-12-27T05:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.782961 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.791650 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.797909 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.808884 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.821796 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.832303 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.852361 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":
\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-
dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.863222 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.872708 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.877187 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.877223 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.877233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.877248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.877258 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:10Z","lastTransitionTime":"2025-12-27T05:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.885175 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.891273 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial 
tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.898461 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.907728 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.980894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:10 crc 
kubenswrapper[4760]: I1227 05:45:10.980936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.980947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.980964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:10 crc kubenswrapper[4760]: I1227 05:45:10.980974 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:10Z","lastTransitionTime":"2025-12-27T05:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.084217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.085164 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.085371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.085485 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.085581 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:11Z","lastTransitionTime":"2025-12-27T05:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.188534 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.188602 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.188619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.188646 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.188670 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:11Z","lastTransitionTime":"2025-12-27T05:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.291976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.292036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.292054 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.292083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.292129 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:11Z","lastTransitionTime":"2025-12-27T05:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.394259 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.394324 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.394349 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.394378 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.394396 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:11Z","lastTransitionTime":"2025-12-27T05:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.497154 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.497420 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.497530 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.497667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.497751 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:11Z","lastTransitionTime":"2025-12-27T05:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.502644 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:11 crc kubenswrapper[4760]: E1227 05:45:11.502767 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.502855 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:11 crc kubenswrapper[4760]: E1227 05:45:11.502932 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.503716 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:11 crc kubenswrapper[4760]: E1227 05:45:11.503799 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.601854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.601909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.601922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.601945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.601961 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:11Z","lastTransitionTime":"2025-12-27T05:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.705504 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.705721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.705854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.705957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.706048 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:11Z","lastTransitionTime":"2025-12-27T05:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.770712 4760 generic.go:334] "Generic (PLEG): container finished" podID="068e7548-398e-4313-a68a-fc4dcc88fbc6" containerID="c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4" exitCode=0 Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.770829 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" event={"ID":"068e7548-398e-4313-a68a-fc4dcc88fbc6","Type":"ContainerDied","Data":"c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4"} Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.786272 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.801146 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.810636 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.810701 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.810720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.810745 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.810762 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:11Z","lastTransitionTime":"2025-12-27T05:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.818252 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48
a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.829665 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.839891 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netn
s\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.852914 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.862067 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.876011 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.876919 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711
\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.887390 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.898970 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.910254 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.913049 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.913134 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.913154 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.913182 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.913199 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:11Z","lastTransitionTime":"2025-12-27T05:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.916860 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.925240 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.934259 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.948316 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.962574 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.975148 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:11 crc kubenswrapper[4760]: I1227 05:45:11.986121 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.011380 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574
53265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\
\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.016385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.016439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.016456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.016480 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.016499 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:12Z","lastTransitionTime":"2025-12-27T05:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.027608 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.040711 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.058039 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.070405 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.087450 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.104703 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.192864 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.196050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.196289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.196310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.196337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.196355 4760 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:12Z","lastTransitionTime":"2025-12-27T05:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.208966 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.219822 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.299579 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.299652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.299663 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.299684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.299697 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:12Z","lastTransitionTime":"2025-12-27T05:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.402558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.402617 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.402637 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.402665 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.402685 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:12Z","lastTransitionTime":"2025-12-27T05:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.506027 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.506124 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.506275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.506358 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.506378 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:12Z","lastTransitionTime":"2025-12-27T05:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.609905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.609977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.609999 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.610024 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.610042 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:12Z","lastTransitionTime":"2025-12-27T05:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.712747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.712781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.712792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.712834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.712849 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:12Z","lastTransitionTime":"2025-12-27T05:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.815302 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.815535 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.815552 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.815576 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.815595 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:12Z","lastTransitionTime":"2025-12-27T05:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.918005 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.918067 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.918079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.918110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:12 crc kubenswrapper[4760]: I1227 05:45:12.918121 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:12Z","lastTransitionTime":"2025-12-27T05:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.020390 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.020426 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.020436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.020455 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.020466 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:13Z","lastTransitionTime":"2025-12-27T05:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.124079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.124177 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.124199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.124226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.124245 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:13Z","lastTransitionTime":"2025-12-27T05:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.226976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.227016 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.227028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.227045 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.227060 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:13Z","lastTransitionTime":"2025-12-27T05:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.330567 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.330624 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.330648 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.330678 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.330700 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:13Z","lastTransitionTime":"2025-12-27T05:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.433926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.433976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.433992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.434016 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.434034 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:13Z","lastTransitionTime":"2025-12-27T05:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.502281 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.502639 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.502701 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:13 crc kubenswrapper[4760]: E1227 05:45:13.503029 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 27 05:45:13 crc kubenswrapper[4760]: E1227 05:45:13.503251 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 27 05:45:13 crc kubenswrapper[4760]: E1227 05:45:13.503636 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.536911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.536954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.536965 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.536985 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.536998 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:13Z","lastTransitionTime":"2025-12-27T05:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.640273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.640329 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.640348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.640378 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.640395 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:13Z","lastTransitionTime":"2025-12-27T05:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.745215 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.745265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.745277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.745297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.745310 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:13Z","lastTransitionTime":"2025-12-27T05:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.848076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.848744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.848769 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.848801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.848832 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:13Z","lastTransitionTime":"2025-12-27T05:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.952254 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.952309 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.952325 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.952349 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:13 crc kubenswrapper[4760]: I1227 05:45:13.952368 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:13Z","lastTransitionTime":"2025-12-27T05:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.055721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.055798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.055815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.055921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.055941 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:14Z","lastTransitionTime":"2025-12-27T05:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.163642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.163786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.164632 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.164689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.164714 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:14Z","lastTransitionTime":"2025-12-27T05:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.268205 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.268267 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.268284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.268306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.268323 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:14Z","lastTransitionTime":"2025-12-27T05:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.321025 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m"] Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.321699 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.323467 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.326578 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.332383 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.346947 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.358536 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 
05:45:14.372429 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.372454 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.372462 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.372477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.372486 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:14Z","lastTransitionTime":"2025-12-27T05:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.375316 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.389012 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.404109 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.413744 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e9eac231-cf7e-4b8a-88c6-45cf768b3bad-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7cs9m\" (UID: \"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.413829 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e9eac231-cf7e-4b8a-88c6-45cf768b3bad-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7cs9m\" (UID: \"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.413886 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxsvq\" (UniqueName: \"kubernetes.io/projected/e9eac231-cf7e-4b8a-88c6-45cf768b3bad-kube-api-access-xxsvq\") pod \"ovnkube-control-plane-749d76644c-7cs9m\" (UID: \"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.413990 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e9eac231-cf7e-4b8a-88c6-45cf768b3bad-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7cs9m\" (UID: \"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.416706 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.436079 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.444226 
4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7cs9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.453908 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.469939 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.482036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.482129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.482149 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.482173 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.482190 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:14Z","lastTransitionTime":"2025-12-27T05:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.489153 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:
45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.497625 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.512812 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.514778 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e9eac231-cf7e-4b8a-88c6-45cf768b3bad-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7cs9m\" (UID: \"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.514979 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e9eac231-cf7e-4b8a-88c6-45cf768b3bad-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7cs9m\" (UID: \"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.515121 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e9eac231-cf7e-4b8a-88c6-45cf768b3bad-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7cs9m\" (UID: \"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.515164 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxsvq\" (UniqueName: \"kubernetes.io/projected/e9eac231-cf7e-4b8a-88c6-45cf768b3bad-kube-api-access-xxsvq\") pod \"ovnkube-control-plane-749d76644c-7cs9m\" (UID: \"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.515629 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e9eac231-cf7e-4b8a-88c6-45cf768b3bad-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7cs9m\" (UID: \"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.515919 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e9eac231-cf7e-4b8a-88c6-45cf768b3bad-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7cs9m\" (UID: \"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.524472 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e9eac231-cf7e-4b8a-88c6-45cf768b3bad-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7cs9m\" (UID: \"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.530385 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.538664 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxsvq\" (UniqueName: \"kubernetes.io/projected/e9eac231-cf7e-4b8a-88c6-45cf768b3bad-kube-api-access-xxsvq\") pod \"ovnkube-control-plane-749d76644c-7cs9m\" (UID: \"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.583767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.583831 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.583848 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.583872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.583891 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:14Z","lastTransitionTime":"2025-12-27T05:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.642524 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" Dec 27 05:45:14 crc kubenswrapper[4760]: W1227 05:45:14.663459 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9eac231_cf7e_4b8a_88c6_45cf768b3bad.slice/crio-baed97c4bdde125f49d16354f9e7bf23fe00978eee59f47186b95e2d1dd1be30 WatchSource:0}: Error finding container baed97c4bdde125f49d16354f9e7bf23fe00978eee59f47186b95e2d1dd1be30: Status 404 returned error can't find the container with id baed97c4bdde125f49d16354f9e7bf23fe00978eee59f47186b95e2d1dd1be30 Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.687315 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.687838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.688062 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.688351 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.688590 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:14Z","lastTransitionTime":"2025-12-27T05:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.785846 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerStarted","Data":"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1"} Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.788878 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" event={"ID":"068e7548-398e-4313-a68a-fc4dcc88fbc6","Type":"ContainerStarted","Data":"99b926932385e35861d5982f3754124c49fc3e7a4f52653b22a6ca12ab32ea98"} Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.789910 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" event={"ID":"e9eac231-cf7e-4b8a-88c6-45cf768b3bad","Type":"ContainerStarted","Data":"baed97c4bdde125f49d16354f9e7bf23fe00978eee59f47186b95e2d1dd1be30"} Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.790401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.790435 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.790447 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.790466 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.790481 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:14Z","lastTransitionTime":"2025-12-27T05:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.893790 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.894314 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.894343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.894376 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.894395 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:14Z","lastTransitionTime":"2025-12-27T05:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.997343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.997393 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.997410 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.997436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:14 crc kubenswrapper[4760]: I1227 05:45:14.997453 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:14Z","lastTransitionTime":"2025-12-27T05:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.100422 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.100477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.100494 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.100517 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.100535 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:15Z","lastTransitionTime":"2025-12-27T05:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.203348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.203438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.203647 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.203674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.203962 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:15Z","lastTransitionTime":"2025-12-27T05:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.309016 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.309121 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.309153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.309186 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.309217 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:15Z","lastTransitionTime":"2025-12-27T05:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.413835 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.413903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.413921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.413949 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.413971 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:15Z","lastTransitionTime":"2025-12-27T05:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.502571 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:15 crc kubenswrapper[4760]: E1227 05:45:15.502910 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.502574 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:15 crc kubenswrapper[4760]: E1227 05:45:15.503536 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.503280 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:15 crc kubenswrapper[4760]: E1227 05:45:15.503801 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.516419 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.516475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.516493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.516515 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.516531 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:15Z","lastTransitionTime":"2025-12-27T05:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.618962 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.619005 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.619017 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.619036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.619048 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:15Z","lastTransitionTime":"2025-12-27T05:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.721517 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.721588 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.721607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.721633 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.721650 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:15Z","lastTransitionTime":"2025-12-27T05:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.824835 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.824900 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.824923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.824954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.824975 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:15Z","lastTransitionTime":"2025-12-27T05:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.931341 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.931403 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.931421 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.931447 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:15 crc kubenswrapper[4760]: I1227 05:45:15.931465 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:15Z","lastTransitionTime":"2025-12-27T05:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.034371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.034429 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.034447 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.034472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.034489 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:16Z","lastTransitionTime":"2025-12-27T05:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.137895 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.137972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.137996 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.138028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.138066 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:16Z","lastTransitionTime":"2025-12-27T05:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.140822 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bxmb9"] Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.141636 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:16 crc kubenswrapper[4760]: E1227 05:45:16.141755 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxmb9" podUID="d7ad49d7-d1ed-4414-8a10-778d020e1da5" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.158488 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\
"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.177301 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.193135 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.205867 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.232268 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs\") pod \"network-metrics-daemon-bxmb9\" (UID: \"d7ad49d7-d1ed-4414-8a10-778d020e1da5\") " pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.232341 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6gjv\" (UniqueName: \"kubernetes.io/projected/d7ad49d7-d1ed-4414-8a10-778d020e1da5-kube-api-access-w6gjv\") pod \"network-metrics-daemon-bxmb9\" (UID: \"d7ad49d7-d1ed-4414-8a10-778d020e1da5\") " pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.235989 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.241288 
4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.241347 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.241365 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.241393 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.241412 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:16Z","lastTransitionTime":"2025-12-27T05:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.253177 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7cs9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.266244 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxmb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ad49d7-d1ed-4414-8a10-778d020e1da5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxmb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.309171 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.327141 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.333677 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs\") pod \"network-metrics-daemon-bxmb9\" (UID: \"d7ad49d7-d1ed-4414-8a10-778d020e1da5\") " pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.333736 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6gjv\" (UniqueName: \"kubernetes.io/projected/d7ad49d7-d1ed-4414-8a10-778d020e1da5-kube-api-access-w6gjv\") pod \"network-metrics-daemon-bxmb9\" (UID: \"d7ad49d7-d1ed-4414-8a10-778d020e1da5\") " pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:16 crc kubenswrapper[4760]: E1227 05:45:16.333888 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 27 05:45:16 crc kubenswrapper[4760]: E1227 05:45:16.333972 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs podName:d7ad49d7-d1ed-4414-8a10-778d020e1da5 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:16.833949 +0000 UTC m=+39.594018325 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs") pod "network-metrics-daemon-bxmb9" (UID: "d7ad49d7-d1ed-4414-8a10-778d020e1da5") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.344380 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.344418 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.344431 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.344447 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.344460 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:16Z","lastTransitionTime":"2025-12-27T05:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.347349 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.357631 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.369606 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6gjv\" (UniqueName: \"kubernetes.io/projected/d7ad49d7-d1ed-4414-8a10-778d020e1da5-kube-api-access-w6gjv\") pod \"network-metrics-daemon-bxmb9\" (UID: \"d7ad49d7-d1ed-4414-8a10-778d020e1da5\") " pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.371928 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.389494 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.414211 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.432321 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.447689 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.449229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.449293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.449305 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.449333 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.449347 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:16Z","lastTransitionTime":"2025-12-27T05:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.552781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.552867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.552894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.552928 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.552953 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:16Z","lastTransitionTime":"2025-12-27T05:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.656659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.656708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.656724 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.656747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.656763 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:16Z","lastTransitionTime":"2025-12-27T05:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.759834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.759880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.759889 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.759905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.759914 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:16Z","lastTransitionTime":"2025-12-27T05:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.802076 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-t57k6" event={"ID":"67571db3-3f43-4589-bf18-a42b6ea3da12","Type":"ContainerStarted","Data":"0b714ababb0c141331d002ccc1f887dbc134b5d96b9587cb2dba6b1267892ed2"} Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.802660 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.814647 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.831947 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.838007 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs\") pod \"network-metrics-daemon-bxmb9\" (UID: \"d7ad49d7-d1ed-4414-8a10-778d020e1da5\") " pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:16 crc kubenswrapper[4760]: E1227 05:45:16.838450 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 27 05:45:16 crc kubenswrapper[4760]: E1227 05:45:16.838776 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs podName:d7ad49d7-d1ed-4414-8a10-778d020e1da5 nodeName:}" failed. 
No retries permitted until 2025-12-27 05:45:17.838744652 +0000 UTC m=+40.598813987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs") pod "network-metrics-daemon-bxmb9" (UID: "d7ad49d7-d1ed-4414-8a10-778d020e1da5") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.848476 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"na
me\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.862318 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.862385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.862411 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.862444 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.862466 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:16Z","lastTransitionTime":"2025-12-27T05:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.867367 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:
45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.882519 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.894421 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.907213 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da15
2ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.921981 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.936898 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.937698 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.950437 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.965157 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.965206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.965221 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.965244 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.965263 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:16Z","lastTransitionTime":"2025-12-27T05:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.975316 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.984261 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7cs9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:16 crc kubenswrapper[4760]: I1227 05:45:16.997776 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxmb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ad49d7-d1ed-4414-8a10-778d020e1da5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxmb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.012256 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.025327 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.037896 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.060882 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.068009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.068039 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.068047 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.068061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.068070 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:17Z","lastTransitionTime":"2025-12-27T05:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.082457 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.093723 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.103375 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.120415 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6f4ddfbdb7c7e60862f294260b31dd8701c67a
79ab4f434ef01ca4a6d2ecc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.128668 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7cs9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.143678 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxmb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ad49d7-d1ed-4414-8a10-778d020e1da5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxmb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.158228 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.168182 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.170940 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.170996 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.171009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.171026 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.171039 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:17Z","lastTransitionTime":"2025-12-27T05:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.180555 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.189765 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.202695 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.212605 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.228478 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.243324 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.256772 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.274116 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.274160 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.274174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.274193 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.274208 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:17Z","lastTransitionTime":"2025-12-27T05:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.341845 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.341963 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.342121 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:33.342071008 +0000 UTC m=+56.102140333 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.342153 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.342214 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.342352 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.342344 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.342386 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.342426 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 27 
05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.342244 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.342426 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.342486 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:33.342450558 +0000 UTC m=+56.102519933 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.342493 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.342528 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:33.34250898 +0000 UTC m=+56.102578435 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.342375 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.342562 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:33.34254845 +0000 UTC m=+56.102617775 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.342577 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.342637 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:33.342619022 +0000 UTC m=+56.102688377 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.377564 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.377625 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.377644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.377670 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.377687 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:17Z","lastTransitionTime":"2025-12-27T05:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.480527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.480638 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.480657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.480685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.480702 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:17Z","lastTransitionTime":"2025-12-27T05:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.504346 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.504462 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.504658 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.504720 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.504880 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.505041 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxmb9" podUID="d7ad49d7-d1ed-4414-8a10-778d020e1da5" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.504795 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.505297 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.519550 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.531691 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.546285 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.560327 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.574417 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.582754 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.582783 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.582812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.582826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.582834 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:17Z","lastTransitionTime":"2025-12-27T05:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.584798 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.612875 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6f4ddfbdb7c7e60862f294260b31dd8701c67a
79ab4f434ef01ca4a6d2ecc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.622779 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7cs9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.633896 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxmb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ad49d7-d1ed-4414-8a10-778d020e1da5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxmb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.645416 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.656351 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netn
s\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.669733 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.679813 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.685395 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.685456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.685472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.685494 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.685510 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:17Z","lastTransitionTime":"2025-12-27T05:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.691309 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crc
ont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.701525 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.711586 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.788643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.788707 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.788729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.788757 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.788777 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:17Z","lastTransitionTime":"2025-12-27T05:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.810509 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" event={"ID":"e9eac231-cf7e-4b8a-88c6-45cf768b3bad","Type":"ContainerStarted","Data":"b27c5e42d64ff0b483e5fc6e0da72ec9b58bb56b36fe0bab6bfd1c4fa0ec6fe9"} Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.815533 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" event={"ID":"068e7548-398e-4313-a68a-fc4dcc88fbc6","Type":"ContainerDied","Data":"99b926932385e35861d5982f3754124c49fc3e7a4f52653b22a6ca12ab32ea98"} Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.816308 4760 generic.go:334] "Generic (PLEG): container finished" podID="068e7548-398e-4313-a68a-fc4dcc88fbc6" containerID="99b926932385e35861d5982f3754124c49fc3e7a4f52653b22a6ca12ab32ea98" exitCode=0 Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.816675 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.818316 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.834794 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.848330 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.848465 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs\") pod \"network-metrics-daemon-bxmb9\" (UID: \"d7ad49d7-d1ed-4414-8a10-778d020e1da5\") " pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.849019 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 27 05:45:17 crc kubenswrapper[4760]: E1227 05:45:17.849081 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs podName:d7ad49d7-d1ed-4414-8a10-778d020e1da5 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:19.849064534 +0000 UTC m=+42.609133849 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs") pod "network-metrics-daemon-bxmb9" (UID: "d7ad49d7-d1ed-4414-8a10-778d020e1da5") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.857622 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.861428 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.889252 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6f4ddfbdb7c7e60862f294260b31dd8701c67a
79ab4f434ef01ca4a6d2ecc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.897378 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.897442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.897456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.897473 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.897486 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:17Z","lastTransitionTime":"2025-12-27T05:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.905955 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7cs9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc 
kubenswrapper[4760]: I1227 05:45:17.917275 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxmb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ad49d7-d1ed-4414-8a10-778d020e1da5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxmb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.933499 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.945577 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.964498 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9465794198b575
0536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99b926932385e35861d5982f3754124c49fc3e7a4f52653b22a6ca12ab32ea98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b926932385e35861d5982f3754124c49fc3e7a4f52653b22a6ca12ab32ea98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.975223 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.984316 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.996010 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c2
07dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.999360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.999394 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.999402 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.999424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:17 crc kubenswrapper[4760]: I1227 05:45:17.999434 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:17Z","lastTransitionTime":"2025-12-27T05:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.010536 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.022345 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.030420 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.038141 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.048519 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.061884 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.074218 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.083662 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.091848 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.098500 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b714ababb0c141331d002ccc1f887dbc134b5d96b9587cb2dba6b1267892ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.104638 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.104679 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.104692 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.104711 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.104724 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:18Z","lastTransitionTime":"2025-12-27T05:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.108234 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7cs9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.116395 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxmb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ad49d7-d1ed-4414-8a10-778d020e1da5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxmb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.128082 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.143844 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.157747 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.180312 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6f4ddfbdb7c7e60862f294260b31dd8701c67a
79ab4f434ef01ca4a6d2ecc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.196401 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.207151 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.207301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.207361 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.207420 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.207503 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:18Z","lastTransitionTime":"2025-12-27T05:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.207971 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.222305 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99b926932385e35861d5982f3754124c49fc3e7a4f52653b22a6ca12ab32ea98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b926932385e35861d5982f3754124c49fc3e7a4f52653b22a6ca12ab32ea98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.229571 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.310006 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.310045 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.310063 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.310086 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.310139 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:18Z","lastTransitionTime":"2025-12-27T05:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.325836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.325947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.325970 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.325997 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.326013 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:18Z","lastTransitionTime":"2025-12-27T05:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:18 crc kubenswrapper[4760]: E1227 05:45:18.343526 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cab3fba8-5ed3-434a-8f84-cd17705f5a67\\\",\\\"systemUUID\\\":\\\"a18cf57b-5a1d-4f23-a965-e04c5441f26a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.347771 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.347927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.348012 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.348148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.348259 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:18Z","lastTransitionTime":"2025-12-27T05:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:18 crc kubenswrapper[4760]: E1227 05:45:18.358584 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cab3fba8-5ed3-434a-8f84-cd17705f5a67\\\",\\\"systemUUID\\\":\\\"a18cf57b-5a1d-4f23-a965-e04c5441f26a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.363638 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.363826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.363952 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.364072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.364244 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:18Z","lastTransitionTime":"2025-12-27T05:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:18 crc kubenswrapper[4760]: E1227 05:45:18.378686 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cab3fba8-5ed3-434a-8f84-cd17705f5a67\\\",\\\"systemUUID\\\":\\\"a18cf57b-5a1d-4f23-a965-e04c5441f26a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.382247 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.382425 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.382539 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.382653 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.382772 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:18Z","lastTransitionTime":"2025-12-27T05:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:18 crc kubenswrapper[4760]: E1227 05:45:18.391872 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cab3fba8-5ed3-434a-8f84-cd17705f5a67\\\",\\\"systemUUID\\\":\\\"a18cf57b-5a1d-4f23-a965-e04c5441f26a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.396205 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.396266 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.396280 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.396299 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.396316 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:18Z","lastTransitionTime":"2025-12-27T05:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:18 crc kubenswrapper[4760]: E1227 05:45:18.416716 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cab3fba8-5ed3-434a-8f84-cd17705f5a67\\\",\\\"systemUUID\\\":\\\"a18cf57b-5a1d-4f23-a965-e04c5441f26a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 27 05:45:18 crc kubenswrapper[4760]: E1227 05:45:18.416957 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.419641 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.419701 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.419722 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.419748 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.419767 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:18Z","lastTransitionTime":"2025-12-27T05:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.522568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.522645 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.522664 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.522689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.522705 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:18Z","lastTransitionTime":"2025-12-27T05:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.625781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.625877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.625896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.625920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.625938 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:18Z","lastTransitionTime":"2025-12-27T05:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.728959 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.729022 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.729040 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.729066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.729083 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:18Z","lastTransitionTime":"2025-12-27T05:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.820291 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.832189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.832284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.832307 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.832337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.832363 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:18Z","lastTransitionTime":"2025-12-27T05:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.936280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.936370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.936405 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.936441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:18 crc kubenswrapper[4760]: I1227 05:45:18.936467 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:18Z","lastTransitionTime":"2025-12-27T05:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.039456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.039516 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.039533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.039558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.039581 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:19Z","lastTransitionTime":"2025-12-27T05:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.142348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.142373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.142382 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.142397 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.142406 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:19Z","lastTransitionTime":"2025-12-27T05:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.245251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.245314 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.245332 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.245356 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.245373 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:19Z","lastTransitionTime":"2025-12-27T05:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.348841 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.348873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.348883 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.348897 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.348907 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:19Z","lastTransitionTime":"2025-12-27T05:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.451608 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.451644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.451652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.451665 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.451672 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:19Z","lastTransitionTime":"2025-12-27T05:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.502292 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.502373 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.502304 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:19 crc kubenswrapper[4760]: E1227 05:45:19.502425 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.502560 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:19 crc kubenswrapper[4760]: E1227 05:45:19.502572 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 27 05:45:19 crc kubenswrapper[4760]: E1227 05:45:19.502632 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:19 crc kubenswrapper[4760]: E1227 05:45:19.502772 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bxmb9" podUID="d7ad49d7-d1ed-4414-8a10-778d020e1da5" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.554752 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.554804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.554823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.554848 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.554866 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:19Z","lastTransitionTime":"2025-12-27T05:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.658457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.658515 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.658532 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.658560 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.658579 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:19Z","lastTransitionTime":"2025-12-27T05:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.762561 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.762631 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.762651 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.762680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.762699 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:19Z","lastTransitionTime":"2025-12-27T05:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.823851 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.865920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.866353 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.866544 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.866697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.866822 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:19Z","lastTransitionTime":"2025-12-27T05:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.872788 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs\") pod \"network-metrics-daemon-bxmb9\" (UID: \"d7ad49d7-d1ed-4414-8a10-778d020e1da5\") " pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:19 crc kubenswrapper[4760]: E1227 05:45:19.872988 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 27 05:45:19 crc kubenswrapper[4760]: E1227 05:45:19.873164 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs podName:d7ad49d7-d1ed-4414-8a10-778d020e1da5 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:23.873135928 +0000 UTC m=+46.633205273 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs") pod "network-metrics-daemon-bxmb9" (UID: "d7ad49d7-d1ed-4414-8a10-778d020e1da5") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.969410 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.969475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.969495 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.969523 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:19 crc kubenswrapper[4760]: I1227 05:45:19.969542 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:19Z","lastTransitionTime":"2025-12-27T05:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.072983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.073034 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.073050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.073074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.073134 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:20Z","lastTransitionTime":"2025-12-27T05:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.176787 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.176848 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.176858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.176882 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.176895 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:20Z","lastTransitionTime":"2025-12-27T05:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.280478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.280579 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.280599 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.280659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.280690 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:20Z","lastTransitionTime":"2025-12-27T05:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.383698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.383756 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.383779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.383803 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.383816 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:20Z","lastTransitionTime":"2025-12-27T05:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.486727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.486791 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.486809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.486838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.486855 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:20Z","lastTransitionTime":"2025-12-27T05:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.590821 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.590881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.590894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.590916 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.590928 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:20Z","lastTransitionTime":"2025-12-27T05:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.694615 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.694673 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.694691 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.694720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.694738 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:20Z","lastTransitionTime":"2025-12-27T05:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.798083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.798135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.798144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.798158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.798168 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:20Z","lastTransitionTime":"2025-12-27T05:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.901575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.901614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.901629 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.901644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:20 crc kubenswrapper[4760]: I1227 05:45:20.901656 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:20Z","lastTransitionTime":"2025-12-27T05:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.005174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.005240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.005258 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.005284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.005303 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:21Z","lastTransitionTime":"2025-12-27T05:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.107922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.107998 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.108018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.108042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.108062 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:21Z","lastTransitionTime":"2025-12-27T05:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.211651 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.211719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.211741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.211769 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.211790 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:21Z","lastTransitionTime":"2025-12-27T05:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.315322 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.315373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.315389 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.315413 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.315430 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:21Z","lastTransitionTime":"2025-12-27T05:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.418006 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.418054 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.418066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.418123 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.418137 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:21Z","lastTransitionTime":"2025-12-27T05:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.501578 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.501639 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.501644 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 27 05:45:21 crc kubenswrapper[4760]: E1227 05:45:21.501722 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.501578 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxmb9"
Dec 27 05:45:21 crc kubenswrapper[4760]: E1227 05:45:21.501878 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 27 05:45:21 crc kubenswrapper[4760]: E1227 05:45:21.501979 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
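[Editor's note] Every "Error syncing pod, skipping" entry above shares one root cause, spelled out in the NotReady condition: the container runtime finds no CNI configuration file in /etc/kubernetes/cni/net.d/, so no sandbox that needs cluster networking can be created. On this cluster that file is expected to appear once the network operator's own pods (starting at 05:45:21 below) have done their work. Purely to illustrate the kind of file the runtime is waiting for, here is a minimal, hypothetical CNI conflist; the name, bridge, and subnet are invented for the example and are not what OpenShift actually installs:

    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16"
          }
        }
      ]
    }

Once a valid network configuration shows up in that directory, the runtime should report NetworkReady=true and the KubeletNotReady condition above should clear.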
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:21 crc kubenswrapper[4760]: E1227 05:45:21.502073 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxmb9" podUID="d7ad49d7-d1ed-4414-8a10-778d020e1da5" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.520393 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.520451 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.520461 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.520479 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.520491 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:21Z","lastTransitionTime":"2025-12-27T05:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.624203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.624258 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.624274 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.624289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.624298 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:21Z","lastTransitionTime":"2025-12-27T05:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.726925 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.726993 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.727011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.727129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.727152 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:21Z","lastTransitionTime":"2025-12-27T05:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.830266 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.830315 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.830325 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.830361 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.830371 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:21Z","lastTransitionTime":"2025-12-27T05:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.834743 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"478b5920679d399240a9d765a45cfbadda8fe5fa81b865770d5ec3a310c1b66b"}
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.836411 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6e723660fe5b8e050d401e6526b8fd44e8bf3a2ba7c4e63e6dc46c7a43e09a48"}
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.837893 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9ec89edf32a45551421ff7f5ac22a7ef0bcd1e9ef943f0fcd28235202ee5c4e1"}
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.939339 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.939399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.939414 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.939446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.939462 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:21Z","lastTransitionTime":"2025-12-27T05:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.041604 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.041743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.041769 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.041807 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.041824 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:22Z","lastTransitionTime":"2025-12-27T05:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
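[Editor's note] In contrast to the four pods stuck waiting on CNI, the PLEG entries above show the iptables-alerter, network-node-identity, and network-operator pods actually starting while the node is still NotReady; presumably they use host networking and so do not need a CNI-provisioned sandbox, with the Data hash identifying the started container or sandbox. When triaging walls of kubenswrapper output like this one, it can help to split each entry into its klog header and message. A throwaway Go sketch that does exactly that; the format assumptions are taken from the lines above, and this is an ad-hoc parser, not an official tool:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches the klog header inside a journal line: severity (I/W/E), the
    // MMDD hh:mm:ss.micros timestamp, the emitting file:line, and the message.
    var klogLine = regexp.MustCompile(`kubenswrapper\[\d+\]: ([IWE])(\d{4} \d{2}:\d{2}:\d{2}\.\d+)\s+\d+ (\S+)\] (.*)$`)

    func main() {
        sample := `Dec 27 05:45:21 crc kubenswrapper[4760]: I1227 05:45:21.834743 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h"`
        if m := klogLine.FindStringSubmatch(sample); m != nil {
            fmt.Printf("severity=%s time=%s source=%s\nmsg=%s\n", m[1], m[2], m[3], m[4])
        }
    }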
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.144880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.144930 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.144944 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.144963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.144976 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:22Z","lastTransitionTime":"2025-12-27T05:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.248357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.248421 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.248441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.248468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.248486 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:22Z","lastTransitionTime":"2025-12-27T05:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.350921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.350977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.350994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.351027 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.351042 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:22Z","lastTransitionTime":"2025-12-27T05:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.453522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.453591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.453607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.453632 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.453649 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:22Z","lastTransitionTime":"2025-12-27T05:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.557420 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.557513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.557532 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.557557 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.557577 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:22Z","lastTransitionTime":"2025-12-27T05:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.660112 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.660154 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.660167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.660184 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.660197 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:22Z","lastTransitionTime":"2025-12-27T05:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.763490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.763536 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.763549 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.763566 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.763588 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:22Z","lastTransitionTime":"2025-12-27T05:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.865698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.865760 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.865778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.865802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.865820 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:22Z","lastTransitionTime":"2025-12-27T05:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.968775 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.968839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.968858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.968890 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:22 crc kubenswrapper[4760]: I1227 05:45:22.968908 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:22Z","lastTransitionTime":"2025-12-27T05:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.071424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.071472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.071488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.071510 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.071531 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:23Z","lastTransitionTime":"2025-12-27T05:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.173946 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.174006 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.174030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.174153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.174234 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:23Z","lastTransitionTime":"2025-12-27T05:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.277626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.277682 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.277698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.277720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.277735 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:23Z","lastTransitionTime":"2025-12-27T05:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.381257 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.381317 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.381334 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.381372 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.381408 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:23Z","lastTransitionTime":"2025-12-27T05:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.483963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.484076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.484128 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.484156 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.484178 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:23Z","lastTransitionTime":"2025-12-27T05:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.501881 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.501943 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.501950 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxmb9"
Dec 27 05:45:23 crc kubenswrapper[4760]: E1227 05:45:23.502129 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
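[Editor's note] Note the cadence: the same four pods were retried at 05:45:21 above and are retried again here at 05:45:23, so the pod workers requeue roughly every two seconds while the network stays unready. The authoritative place to watch this is the node's Ready condition that the setters.go entries keep rewriting; a minimal client-go sketch for reading it, assuming a kubeconfig at the default location (the node name "crc" is taken from the log):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from ~/.kube/config and fetch the node object.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Print the Ready condition the kubelet keeps updating in the log.
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s reason=%s\n  %s\n", c.Status, c.Reason, c.Message)
            }
        }
    }

The same condition is also visible with kubectl describe node crc.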
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.502143 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:23 crc kubenswrapper[4760]: E1227 05:45:23.502298 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 27 05:45:23 crc kubenswrapper[4760]: E1227 05:45:23.502362 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 27 05:45:23 crc kubenswrapper[4760]: E1227 05:45:23.502477 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxmb9" podUID="d7ad49d7-d1ed-4414-8a10-778d020e1da5" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.587000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.587057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.587142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.587170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.587189 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:23Z","lastTransitionTime":"2025-12-27T05:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.692028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.692142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.692164 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.692194 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.692213 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:23Z","lastTransitionTime":"2025-12-27T05:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.795737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.795778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.795790 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.795807 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.795820 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:23Z","lastTransitionTime":"2025-12-27T05:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.898810 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.898873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.898920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.898953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.899008 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:23Z","lastTransitionTime":"2025-12-27T05:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:23 crc kubenswrapper[4760]: I1227 05:45:23.921236 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs\") pod \"network-metrics-daemon-bxmb9\" (UID: \"d7ad49d7-d1ed-4414-8a10-778d020e1da5\") " pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:23 crc kubenswrapper[4760]: E1227 05:45:23.921403 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 27 05:45:23 crc kubenswrapper[4760]: E1227 05:45:23.921470 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs podName:d7ad49d7-d1ed-4414-8a10-778d020e1da5 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:31.92145246 +0000 UTC m=+54.681521785 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs") pod "network-metrics-daemon-bxmb9" (UID: "d7ad49d7-d1ed-4414-8a10-778d020e1da5") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.002211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.002276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.002295 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.002323 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.002341 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:24Z","lastTransitionTime":"2025-12-27T05:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.105278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.105358 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.105379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.105403 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.105423 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:24Z","lastTransitionTime":"2025-12-27T05:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.209374 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.209457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.209482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.209514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.209536 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:24Z","lastTransitionTime":"2025-12-27T05:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.313292 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.313353 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.313369 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.313393 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.313411 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:24Z","lastTransitionTime":"2025-12-27T05:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.416278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.416341 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.416354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.416373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.416385 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:24Z","lastTransitionTime":"2025-12-27T05:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.519595 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.519654 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.519668 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.519685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.519700 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:24Z","lastTransitionTime":"2025-12-27T05:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.623441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.623527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.623553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.623595 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.623621 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:24Z","lastTransitionTime":"2025-12-27T05:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.726477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.726529 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.726540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.726558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.726569 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:24Z","lastTransitionTime":"2025-12-27T05:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.830082 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.830206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.830226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.830318 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.830340 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:24Z","lastTransitionTime":"2025-12-27T05:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.933967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.934026 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.934042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.934068 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:24 crc kubenswrapper[4760]: I1227 05:45:24.934084 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:24Z","lastTransitionTime":"2025-12-27T05:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.037028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.037077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.037115 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.037133 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.037145 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:25Z","lastTransitionTime":"2025-12-27T05:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.140816 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.140881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.140900 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.140925 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.140944 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:25Z","lastTransitionTime":"2025-12-27T05:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.243734 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.243784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.243796 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.243818 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.243832 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:25Z","lastTransitionTime":"2025-12-27T05:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.346892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.346945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.346958 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.346977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.346991 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:25Z","lastTransitionTime":"2025-12-27T05:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.450906 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.450984 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.451007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.451038 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.451059 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:25Z","lastTransitionTime":"2025-12-27T05:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.502657 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.502699 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.502754 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.502848 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
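The five-entry block above repeats at roughly 100 ms intervals because the kubelet re-records node status on every sync while the container runtime keeps answering NetworkReady=false; the Ready condition can only clear once a CNI network definition appears in /etc/kubernetes/cni/net.d/, which on this cluster is written by the still-starting OVN-Kubernetes pods. Below is a minimal Go sketch of that kind of directory check, illustrative only and not the actual CRI-O/libcni code; the only detail taken from the log is the directory path.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains a plausible CNI network
// definition. libcni-style loaders look for .conf, .conflist and .json
// files; empty files are skipped here.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err // a missing directory also means "no CNI configuration file"
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			if info, err := e.Info(); err == nil && info.Size() > 0 {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// Directory taken from the log message itself.
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		// Surfaced in the log as: NetworkReady=false reason:NetworkPluginNotReady
		// message:Network plugin returns error: no CNI configuration file ...
		fmt.Println("container runtime network not ready: no CNI configuration file")
		return
	}
	fmt.Println("NetworkReady=true")
}

The "Error syncing pod" entries that follow show the same NetworkReady=false condition aborting sandbox creation for the four pods just listed.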
Dec 27 05:45:25 crc kubenswrapper[4760]: E1227 05:45:25.503362 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxmb9" podUID="d7ad49d7-d1ed-4414-8a10-778d020e1da5" Dec 27 05:45:25 crc kubenswrapper[4760]: E1227 05:45:25.503500 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 27 05:45:25 crc kubenswrapper[4760]: E1227 05:45:25.503651 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:25 crc kubenswrapper[4760]: E1227 05:45:25.503293 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.555561 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.555625 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.555643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.555668 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.555684 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:25Z","lastTransitionTime":"2025-12-27T05:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.659233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.659301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.659318 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.659346 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.659365 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:25Z","lastTransitionTime":"2025-12-27T05:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.763132 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.763233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.763255 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.763283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.763310 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:25Z","lastTransitionTime":"2025-12-27T05:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.866296 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.866415 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.866445 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.866480 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.866504 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:25Z","lastTransitionTime":"2025-12-27T05:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.969914 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.969957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.969970 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.969987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:25 crc kubenswrapper[4760]: I1227 05:45:25.969998 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:25Z","lastTransitionTime":"2025-12-27T05:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.072360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.072426 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.072449 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.072478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.072502 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:26Z","lastTransitionTime":"2025-12-27T05:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.176620 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.176697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.176720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.176751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.176773 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:26Z","lastTransitionTime":"2025-12-27T05:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.279274 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.279324 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.279350 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.279366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.279376 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:26Z","lastTransitionTime":"2025-12-27T05:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.381867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.381929 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.381941 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.381960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.381975 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:26Z","lastTransitionTime":"2025-12-27T05:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.486042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.486164 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.486184 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.486210 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.486227 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:26Z","lastTransitionTime":"2025-12-27T05:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.588832 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.588917 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.588936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.588966 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.588990 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:26Z","lastTransitionTime":"2025-12-27T05:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.637769 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.638044 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.665387 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovnkube-controller" probeResult="failure" output="" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.686972 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovnkube-controller" probeResult="failure" output=""
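The two "Probe failed" entries above come from the kubelet's readiness prober polling the ovnkube-controller container while OVN is still initializing; a failed readiness probe only keeps the container out of the Ready state, it does not restart it, so the kubelet simply retries on the configured period. A rough sketch of that retry loop follows; the log does not say which probe type or endpoint ovnkube-controller defines, so the HTTP URL, timeout, and period below are hypothetical placeholders.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs one HTTP readiness check the way an httpGet probe would:
// any 2xx/3xx status is success; anything else, or a transport error, is failure.
func probeOnce(url string) bool {
	client := &http.Client{Timeout: 1 * time.Second} // kubelet's default timeoutSeconds is 1
	resp, err := client.Get(url)
	if err != nil {
		return false // e.g. connection refused while the container is still starting
	}
	defer resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 400
}

func main() {
	// Hypothetical endpoint; the log does not reveal the real probe target.
	url := "http://127.0.0.1:10256/healthz"
	for i := 0; i < 5; i++ {
		if probeOnce(url) {
			fmt.Println("probeResult=success")
			return
		}
		fmt.Println(`probeResult="failure" output=""`) // matches the repeated log entries
		time.Sleep(10 * time.Second)                   // periodSeconds between attempts
	}
}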
Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.691939 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.691978 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.691995 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.692020 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.692039 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:26Z","lastTransitionTime":"2025-12-27T05:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.800366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.800441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.800467 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.800500 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.800525 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:26Z","lastTransitionTime":"2025-12-27T05:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.903778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.903867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.903893 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.903923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:26 crc kubenswrapper[4760]: I1227 05:45:26.903945 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:26Z","lastTransitionTime":"2025-12-27T05:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.006252 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.006309 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.006329 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.006351 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.006368 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:27Z","lastTransitionTime":"2025-12-27T05:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.109557 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.109839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.109851 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.109868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.109878 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:27Z","lastTransitionTime":"2025-12-27T05:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.212798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.212868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.212886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.212911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.212928 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:27Z","lastTransitionTime":"2025-12-27T05:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.316285 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.316354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.316370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.316395 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.316411 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:27Z","lastTransitionTime":"2025-12-27T05:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.419439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.419496 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.419514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.419539 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.419555 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:27Z","lastTransitionTime":"2025-12-27T05:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.501901 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.501948 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:27 crc kubenswrapper[4760]: E1227 05:45:27.502208 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.502234 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.502325 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:27 crc kubenswrapper[4760]: E1227 05:45:27.502483 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 27 05:45:27 crc kubenswrapper[4760]: E1227 05:45:27.502559 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxmb9" podUID="d7ad49d7-d1ed-4414-8a10-778d020e1da5"
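The entries below add a second, independent failure: besides the CNI wait, every status patch the kubelet sends is rejected because the apiserver must first consult the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node's clock reads 2025-12-27 (several containers are also held in CreateContainerConfigError because service environment variables cannot be constructed before the kubelet has listed Services once). The "x509: certificate has expired or is not yet valid" text is the standard NotBefore/NotAfter validity-window check from Go's crypto/x509; a small standalone sketch follows, with a hypothetical certificate path.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; on the node the webhook's serving cert lives
	// wherever network-node-identity mounts it.
	pemBytes, err := os.ReadFile("/tmp/webhook-serving-cert.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The TLS handshake rejects any certificate outside its
	// [NotBefore, NotAfter] window, producing the error seen in the log.
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}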
pod="openshift-multus/network-metrics-daemon-bxmb9" podUID="d7ad49d7-d1ed-4414-8a10-778d020e1da5" Dec 27 05:45:27 crc kubenswrapper[4760]: E1227 05:45:27.502616 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.519619 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.521576 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.521610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.521619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.521635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.521645 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:27Z","lastTransitionTime":"2025-12-27T05:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.531997 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.549869 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.563996 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.580997 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.594741 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b714ababb0c141331d002ccc1f887dbc134b5d96b9587cb2dba6b1267892ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.611819 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.625207 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.625278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.625298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.625323 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.625343 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:27Z","lastTransitionTime":"2025-12-27T05:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.630449 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.634635 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.644000 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.649467 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.676121 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6f4ddfbdb7c7e6086
2f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.689084 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7cs9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.700608 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxmb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ad49d7-d1ed-4414-8a10-778d020e1da5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxmb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.718868 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.728192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.728254 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.728280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.728327 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.728347 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:27Z","lastTransitionTime":"2025-12-27T05:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.739397 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.759447 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99b926932385e35861d5982f3754124c49fc3e7a4f52653b22a6ca12ab32ea98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b926932385e35861d5982f3754124c49fc3e7a4f52653b22a6ca12ab32ea98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.771725 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.785809 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.806229 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.817425 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.830285 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.830344 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.830361 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.830385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.830400 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:27Z","lastTransitionTime":"2025-12-27T05:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.831085 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99b926932385e35861d5982f3754124c49fc3e7a4f52653b22a6ca12ab32ea98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b926932385e35861d5982f3754124c49fc3e7a4f52653b22a6ca12ab32ea98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.842950 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a5c17d-dc36-4bc6-97c3-70723f87e17d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ba024cf540fd521be6ad22ee118d4fcdc7e38642d2eac86bb0d78aec3b6658e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7880e77761692416d9d8d5482860e21c594b83f6627009f83939b576998d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74de918429279895f641a9427303c2cdc7fcf4b2c010a4544e6fbe784ba9b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://defa202dee001828a2f06a9eac3136753fd9c212f1241af12575ec70c6283c86\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://defa202dee001828a2f06a9eac3136753fd9c212f1241af12575ec70c6283c86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.853897 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.862029 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" event={"ID":"e9eac231-cf7e-4b8a-88c6-45cf768b3bad","Type":"ContainerStarted","Data":"1a6f502c731120560d08c5ddfe10597ad2127a6ce4ea69fca5b7e7ed51bf9f99"} Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.865666 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" event={"ID":"068e7548-398e-4313-a68a-fc4dcc88fbc6","Type":"ContainerStarted","Data":"8350773c1b4a29d263540e1fc02ade222486489acb0a6ed205bdebcdbed979a5"} Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.867442 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.878767 4760 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a
547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.894399 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.908950 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.919653 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b714ababb0c141331d002ccc1f887dbc134b5d96b9587cb2dba6b1267892ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.932234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.932280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.932292 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.932310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.932322 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:27Z","lastTransitionTime":"2025-12-27T05:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.941172 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6f4ddfbdb7c7e60862f294260b31dd8701c67a
79ab4f434ef01ca4a6d2ecc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.954975 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7cs9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.965378 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxmb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ad49d7-d1ed-4414-8a10-778d020e1da5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxmb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.979319 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:27 crc kubenswrapper[4760]: I1227 05:45:27.993385 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:27Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.007343 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.019041 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.034687 4760 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.034721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.034733 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.034749 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.034761 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:28Z","lastTransitionTime":"2025-12-27T05:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.034766 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a5c17d-dc36-4bc6-97c3-70723f87e17d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ba024cf540fd521be6ad22ee118d4fcdc7e38642d2eac86bb0d78aec3b6658e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7880e77761692416d9d8d5482860e21c594b83f6627009f83939b576998d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74de918429279895f641a9427303c2cdc7fcf4b2c010a4544e6fbe784ba9b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://defa202dee001828a2f06a9eac3136753fd9c212f1241af12575ec70c6283c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://defa202dee001828a2f06a9eac3136753fd9c212f1241af12575ec70c6283c86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.050311 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.062772 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478b5920679d399240a9d765a45cfbadda8fe5fa81b865770d5ec3a310c1b66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.071704 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b714ababb0c141331d002ccc1f887dbc134b5d96b9587cb2dba6b1267892ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.085042 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.095717 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.105618 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.118242 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.137294 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.137338 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.137350 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.137368 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.137381 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:28Z","lastTransitionTime":"2025-12-27T05:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.139258 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.149940 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27c5e42d64ff0b483e5fc6e0da72ec9b58bb56b36fe0bab6bfd1c4fa0ec6fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a6f502c731120560d08c5ddfe10597ad2127a6ce4ea69fca5b7e7ed51bf9f99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7cs9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 
05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.157490 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxmb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ad49d7-d1ed-4414-8a10-778d020e1da5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxmb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.169789 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ec89edf32a45551421ff7f5ac22a7ef0bcd1e9ef943f0fcd28235202ee5c4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.181049 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.205223 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99b926932385e35861d5982f3754124c49fc3e7a4f52653b22a6ca12ab32ea98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b926932385e35861d5982f3754124c49fc3e7a4f52653b22a6ca12ab32ea98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8350773c1b4a29d263540e1fc02ade222486489acb0a6ed205bdebcdbed979a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.217338 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.229016 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.239795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.239832 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.239843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.239861 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.239874 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:28Z","lastTransitionTime":"2025-12-27T05:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.343541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.343610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.343630 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.343657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.343676 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:28Z","lastTransitionTime":"2025-12-27T05:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.445187 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.445231 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.445242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.445258 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.445272 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:28Z","lastTransitionTime":"2025-12-27T05:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.547179 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.547227 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.547235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.547248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.547258 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:28Z","lastTransitionTime":"2025-12-27T05:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.650277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.650335 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.650352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.650377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.650395 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:28Z","lastTransitionTime":"2025-12-27T05:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.752735 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.752863 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.752882 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.752906 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.752922 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:28Z","lastTransitionTime":"2025-12-27T05:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.754583 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.754640 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.754656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.754682 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.754698 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:28Z","lastTransitionTime":"2025-12-27T05:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:28 crc kubenswrapper[4760]: E1227 05:45:28.774465 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cab3fba8-5ed3-434a-8f84-cd17705f5a67\\\",\\\"systemUUID\\\":\\\"a18cf57b-5a1d-4f23-a965-e04c5441f26a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.780049 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.780160 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.780187 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.780219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.780242 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:28Z","lastTransitionTime":"2025-12-27T05:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:28 crc kubenswrapper[4760]: E1227 05:45:28.799880 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cab3fba8-5ed3-434a-8f84-cd17705f5a67\\\",\\\"systemUUID\\\":\\\"a18cf57b-5a1d-4f23-a965-e04c5441f26a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.804868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.804925 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.804944 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.804968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.804988 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:28Z","lastTransitionTime":"2025-12-27T05:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:28 crc kubenswrapper[4760]: E1227 05:45:28.824463 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cab3fba8-5ed3-434a-8f84-cd17705f5a67\\\",\\\"systemUUID\\\":\\\"a18cf57b-5a1d-4f23-a965-e04c5441f26a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.828572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.828598 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.828606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.828620 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.828630 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:28Z","lastTransitionTime":"2025-12-27T05:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:28 crc kubenswrapper[4760]: E1227 05:45:28.843031 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cab3fba8-5ed3-434a-8f84-cd17705f5a67\\\",\\\"systemUUID\\\":\\\"a18cf57b-5a1d-4f23-a965-e04c5441f26a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.847002 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.847064 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.847126 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.847161 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.847182 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:28Z","lastTransitionTime":"2025-12-27T05:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:28 crc kubenswrapper[4760]: E1227 05:45:28.867670 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cab3fba8-5ed3-434a-8f84-cd17705f5a67\\\",\\\"systemUUID\\\":\\\"a18cf57b-5a1d-4f23-a965-e04c5441f26a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: E1227 05:45:28.868335 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.870631 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.870812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.870935 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.871136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.871279 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:28Z","lastTransitionTime":"2025-12-27T05:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.877030 4760 generic.go:334] "Generic (PLEG): container finished" podID="068e7548-398e-4313-a68a-fc4dcc88fbc6" containerID="8350773c1b4a29d263540e1fc02ade222486489acb0a6ed205bdebcdbed979a5" exitCode=0 Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.877140 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" event={"ID":"068e7548-398e-4313-a68a-fc4dcc88fbc6","Type":"ContainerDied","Data":"8350773c1b4a29d263540e1fc02ade222486489acb0a6ed205bdebcdbed979a5"} Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.897181 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.913448 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.927047 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478b5920679d399240a9d765a45cfbadda8fe5fa81b865770d5ec3a310c1b66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.940312 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b714ababb0c141331d002ccc1f887dbc134b5d96b9587cb2dba6b1267892ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.952144 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27c5e42d64ff0b483e5fc6e0da72ec9b58bb56b36fe0bab6bfd1c4fa0ec6fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a6f502c731120560d08c5ddfe10597ad2127a6ce4ea69fca5b7e7ed51bf9f99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7cs9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 
05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.965238 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bxmb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ad49d7-d1ed-4414-8a10-778d020e1da5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6gjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bxmb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.974712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.974803 4760 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.974825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.974849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.974865 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:28Z","lastTransitionTime":"2025-12-27T05:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:28 crc kubenswrapper[4760]: I1227 05:45:28.984909 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ec89edf32a45551421ff7f5ac22a7ef0bcd1e9ef943f0fcd28235202ee5c4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:28.999983 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:28Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.015525 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:29Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.037288 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:29Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.053478 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0d712cd-5e1e-4256-8df7-8ab34945b2aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1227 05:44:56.109083 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1227 05:44:56.110891 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1394109768/tls.crt::/tmp/serving-cert-1394109768/tls.key\\\\\\\"\\\\nI1227 05:45:01.538440 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1227 05:45:01.540856 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1227 05:45:01.540890 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1227 05:45:01.540912 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1227 05:45:01.540918 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1227 05:45:01.545310 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1227 05:45:01.545331 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1227 05:45:01.545342 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545349 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1227 05:45:01.545355 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1227 05:45:01.545358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1227 05:45:01.545364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1227 05:45:01.545368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1227 05:45:01.546773 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:29Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.069788 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmk6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jn6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmk6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:29Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.077702 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.077923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.078131 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.078289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.078411 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:29Z","lastTransitionTime":"2025-12-27T05:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.090325 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"068e7548-398e-4313-a68a-fc4dcc88fbc6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191fc1eb447cd4e11d9d8eacaad15238f6db87aac662792bfb5bb8d107cf881b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3922b8063d34fec31e4604c8092ccf0accacf56243a1f2dd627f3895d22b03b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec3c03ae2a7a35286498346c37fb21c17f6ab693d01576c0f040df5fc16a143\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9465794198b5750536edd5726bf8fa0a34e2a9413821e8b8fef9ce31d88fce4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99b926932385e35861d5982f3754124c49fc3e7a4f52653b22a6ca12ab32ea98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b926932385e35861d5982f3754124c49fc3e7a4f52653b22a6ca12ab32ea98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8350773c1b4a29d263540e1fc02ade222486489acb0a6ed205bdebcdbed979a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8350773c1b4a29d263540e1fc02ade222486489acb0a6ed205bdebcdbed979a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zwdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r298b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:29Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.105562 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n82kv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13840804-f114-41e8-947a-df4f0a6ec1ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b857023ef94660ceec7ec3b1da419dccfd47d1a99434a1f4d7d6e1fea4d92112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gs6jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n82kv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:29Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.123394 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a5c17d-dc36-4bc6-97c3-70723f87e17d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ba024cf540fd521be6ad22ee118d4fcdc7e38642d2eac86bb0d78aec3b6658e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7880e77761692416d9d8d5482860e21c594b83f6627009f83939b576998d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74de918429279895f641a9427303c2cdc7fcf4b2c010a4544e6fbe784ba9b60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://defa202dee001828a2f06a9eac3136753fd9c212f1241af12575ec70c6283c86\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://defa202dee001828a2f06a9eac3136753fd9c212f1241af12575ec70c6283c86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:44:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:29Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.135705 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:29Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.154060 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4817e744-ce93-48b6-8642-f3ae31d2db1b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f71223ef021366aa35163e7c0299ce039646e384f38ec0ea69aae3a80750737\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqbtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xhkgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:29Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.182030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.182070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.182099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.182118 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.182130 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:29Z","lastTransitionTime":"2025-12-27T05:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.284802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.285022 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.285081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.285163 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.285223 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:29Z","lastTransitionTime":"2025-12-27T05:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.387648 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.388300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.388348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.388380 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.388398 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:29Z","lastTransitionTime":"2025-12-27T05:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.491414 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.491478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.491495 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.491519 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.491537 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:29Z","lastTransitionTime":"2025-12-27T05:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.501795 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.501832 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.501903 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:29 crc kubenswrapper[4760]: E1227 05:45:29.501976 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.501994 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:29 crc kubenswrapper[4760]: E1227 05:45:29.502086 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 27 05:45:29 crc kubenswrapper[4760]: E1227 05:45:29.502356 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxmb9" podUID="d7ad49d7-d1ed-4414-8a10-778d020e1da5" Dec 27 05:45:29 crc kubenswrapper[4760]: E1227 05:45:29.502558 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.595268 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.595339 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.595364 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.595410 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.595433 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:29Z","lastTransitionTime":"2025-12-27T05:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.698882 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.699269 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.699283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.699300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.699312 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:29Z","lastTransitionTime":"2025-12-27T05:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.802684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.802786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.802811 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.802850 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.802877 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:29Z","lastTransitionTime":"2025-12-27T05:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.882722 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d0ac108d91d871b394be66741e988b8f983c61117c5fa097aa0c4f94ab39797e"} Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.905517 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.905785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.905962 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.906180 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:29 crc kubenswrapper[4760]: I1227 05:45:29.906309 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:29Z","lastTransitionTime":"2025-12-27T05:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.009247 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.009293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.009306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.009326 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.009339 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:30Z","lastTransitionTime":"2025-12-27T05:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.112470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.112513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.112524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.112542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.112553 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:30Z","lastTransitionTime":"2025-12-27T05:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.214881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.214947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.214966 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.214990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.215011 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:30Z","lastTransitionTime":"2025-12-27T05:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.322272 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.322330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.322349 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.322372 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.322391 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:30Z","lastTransitionTime":"2025-12-27T05:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.425868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.425934 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.425958 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.425989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.426009 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:30Z","lastTransitionTime":"2025-12-27T05:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.529470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.529527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.529544 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.529572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.529591 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:30Z","lastTransitionTime":"2025-12-27T05:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.633575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.634027 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.634228 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.634365 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.634481 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:30Z","lastTransitionTime":"2025-12-27T05:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.737962 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.738048 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.738072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.738152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.738195 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:30Z","lastTransitionTime":"2025-12-27T05:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.841672 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.841743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.841767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.841799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.841821 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:30Z","lastTransitionTime":"2025-12-27T05:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.945366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.945428 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.945450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.945480 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:30 crc kubenswrapper[4760]: I1227 05:45:30.945502 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:30Z","lastTransitionTime":"2025-12-27T05:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.068766 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.068811 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.068823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.068843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.068862 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:31Z","lastTransitionTime":"2025-12-27T05:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.172484 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.172524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.172533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.172550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.172560 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:31Z","lastTransitionTime":"2025-12-27T05:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.274436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.274473 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.274481 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.274494 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.274503 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:31Z","lastTransitionTime":"2025-12-27T05:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.376983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.377034 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.377047 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.377065 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.377076 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:31Z","lastTransitionTime":"2025-12-27T05:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.480741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.480798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.480813 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.480832 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.480845 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:31Z","lastTransitionTime":"2025-12-27T05:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.502435 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxmb9"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.502533 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.502566 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 27 05:45:31 crc kubenswrapper[4760]: E1227 05:45:31.502734 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxmb9" podUID="d7ad49d7-d1ed-4414-8a10-778d020e1da5"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.502750 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 27 05:45:31 crc kubenswrapper[4760]: E1227 05:45:31.502855 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 27 05:45:31 crc kubenswrapper[4760]: E1227 05:45:31.502959 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 27 05:45:31 crc kubenswrapper[4760]: E1227 05:45:31.503032 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.583529 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.583564 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.583576 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.583593 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.583624 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:31Z","lastTransitionTime":"2025-12-27T05:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.686404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.686440 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.686451 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.686465 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.686476 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:31Z","lastTransitionTime":"2025-12-27T05:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.789877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.789947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.789976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.790011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.790036 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:31Z","lastTransitionTime":"2025-12-27T05:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.892677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.892736 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.892753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.892777 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.892796 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:31Z","lastTransitionTime":"2025-12-27T05:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.897047 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r298b" event={"ID":"068e7548-398e-4313-a68a-fc4dcc88fbc6","Type":"ContainerStarted","Data":"db9adcb5b849b0c45801054fcfad4453e4e41c456dc4bddda0e8f5f4c6470057"} Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.911991 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35c64f80-15a0-460b-b301-267f1b1a2179\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c964e2175a15679ba843867ceea8b35202d18dca46cc0876481c42c3a013c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275be3530c8f2b99f6579c92c78338c4a0dbc97e2fed8346bfc560f5bdf5774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd5172053c5865844cfe174f94a547c4da5c12116e3ccbd8fcf37939378fa2fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:44:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:31Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.926601 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:31Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.944356 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478b5920679d399240a9d765a45cfbadda8fe5fa81b865770d5ec3a310c1b66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:31Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.959940 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-t57k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67571db3-3f43-4589-bf18-a42b6ea3da12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b714ababb0c141331d002ccc1f887dbc134b5d96b9587cb2dba6b1267892ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-t57k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:31Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.987242 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c8aa557-e11a-4c40-9179-22811f44ff18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-27T05:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-27T05:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8g5xx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rmm9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:31Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.995635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.995679 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.995692 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.995714 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:31 crc kubenswrapper[4760]: I1227 05:45:31.995726 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:31Z","lastTransitionTime":"2025-12-27T05:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.007584 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9eac231-cf7e-4b8a-88c6-45cf768b3bad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-27T05:45:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27c5e42d64ff0b483e5fc6e0da72ec9b58bb56b36fe0bab6bfd1c4fa0ec6fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a6f502c731120560d08c5ddfe10597ad2127a6ce4ea69fca5b7e7ed51bf9f99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-27T05:45:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xxsvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-27T05:45:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7cs9m\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-27T05:45:32Z is after 2025-08-24T17:21:41Z" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.015139 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs\") pod \"network-metrics-daemon-bxmb9\" (UID: \"d7ad49d7-d1ed-4414-8a10-778d020e1da5\") " pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:32 crc kubenswrapper[4760]: E1227 05:45:32.015294 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 27 05:45:32 crc kubenswrapper[4760]: E1227 05:45:32.015354 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs podName:d7ad49d7-d1ed-4414-8a10-778d020e1da5 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:48.015336516 +0000 UTC m=+70.775405841 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs") pod "network-metrics-daemon-bxmb9" (UID: "d7ad49d7-d1ed-4414-8a10-778d020e1da5") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.099591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.099650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.099671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.099699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.099719 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:32Z","lastTransitionTime":"2025-12-27T05:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.114702 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-n82kv" podStartSLOduration=31.114671447 podStartE2EDuration="31.114671447s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:32.114335469 +0000 UTC m=+54.874404794" watchObservedRunningTime="2025-12-27 05:45:32.114671447 +0000 UTC m=+54.874740762" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.137446 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=31.137420237 podStartE2EDuration="31.137420237s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:32.137175151 +0000 UTC m=+54.897244486" watchObservedRunningTime="2025-12-27 05:45:32.137420237 +0000 UTC m=+54.897489552" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.155295 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fmk6w" podStartSLOduration=31.155272981 podStartE2EDuration="31.155272981s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:32.155052366 +0000 UTC m=+54.915121681" watchObservedRunningTime="2025-12-27 05:45:32.155272981 +0000 UTC m=+54.915342296" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.185400 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=5.185382398 podStartE2EDuration="5.185382398s" podCreationTimestamp="2025-12-27 05:45:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:32.184317192 +0000 UTC m=+54.944386497" watchObservedRunningTime="2025-12-27 05:45:32.185382398 +0000 UTC m=+54.945451713" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.201698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.201725 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.201734 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.201745 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.201755 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:32Z","lastTransitionTime":"2025-12-27T05:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.236999 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podStartSLOduration=31.236976484 podStartE2EDuration="31.236976484s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:32.230490309 +0000 UTC m=+54.990559654" watchObservedRunningTime="2025-12-27 05:45:32.236976484 +0000 UTC m=+54.997045799" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.300223 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-r298b" podStartSLOduration=31.300207156 podStartE2EDuration="31.300207156s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:32.286424319 +0000 UTC m=+55.046493634" watchObservedRunningTime="2025-12-27 05:45:32.300207156 +0000 UTC m=+55.060276471" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.303887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.303936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.303954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.303971 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.303983 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:32Z","lastTransitionTime":"2025-12-27T05:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.323079 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-t57k6" podStartSLOduration=31.323064729 podStartE2EDuration="31.323064729s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:32.308249057 +0000 UTC m=+55.068318372" watchObservedRunningTime="2025-12-27 05:45:32.323064729 +0000 UTC m=+55.083134044" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.323391 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=25.323385437 podStartE2EDuration="25.323385437s" podCreationTimestamp="2025-12-27 05:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:32.323154891 +0000 UTC m=+55.083224206" watchObservedRunningTime="2025-12-27 05:45:32.323385437 +0000 UTC m=+55.083454752" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.365919 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" podStartSLOduration=31.365901368 podStartE2EDuration="31.365901368s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:32.356417652 +0000 UTC m=+55.116486977" watchObservedRunningTime="2025-12-27 05:45:32.365901368 +0000 UTC m=+55.125970683" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.366120 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7cs9m" podStartSLOduration=30.366116523 podStartE2EDuration="30.366116523s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:32.365702672 +0000 UTC m=+55.125771987" watchObservedRunningTime="2025-12-27 05:45:32.366116523 +0000 UTC m=+55.126185838" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.405729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.405776 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.405787 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.405806 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.405819 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:32Z","lastTransitionTime":"2025-12-27T05:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.507650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.507684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.507693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.507707 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.507717 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:32Z","lastTransitionTime":"2025-12-27T05:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.610290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.610338 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.610353 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.610373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.610387 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:32Z","lastTransitionTime":"2025-12-27T05:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.713237 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.713308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.713328 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.713352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.713372 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:32Z","lastTransitionTime":"2025-12-27T05:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.816501 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.816557 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.816573 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.816725 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.816753 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:32Z","lastTransitionTime":"2025-12-27T05:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.903262 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmm9v_7c8aa557-e11a-4c40-9179-22811f44ff18/ovnkube-controller/0.log" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.907076 4760 generic.go:334] "Generic (PLEG): container finished" podID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerID="5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1" exitCode=1 Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.907132 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerDied","Data":"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1"} Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.908139 4760 scope.go:117] "RemoveContainer" containerID="5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.918902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.918950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.918966 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.918989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:32 crc kubenswrapper[4760]: I1227 05:45:32.919006 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:32Z","lastTransitionTime":"2025-12-27T05:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.021960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.021989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.021997 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.022012 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.022021 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:33Z","lastTransitionTime":"2025-12-27T05:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.124908 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.125317 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.125542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.125756 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.125947 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:33Z","lastTransitionTime":"2025-12-27T05:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.228958 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.228994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.229008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.229036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.229052 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:33Z","lastTransitionTime":"2025-12-27T05:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.354384 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 05:45:33.354714 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:46:05.354694836 +0000 UTC m=+88.114764141 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.354771 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.354807 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.354829 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.354851 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 05:45:33.354910 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 05:45:33.354917 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 
05:45:33.354956 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-27 05:46:05.354945842 +0000 UTC m=+88.115015157 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 05:45:33.354971 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-27 05:46:05.354964042 +0000 UTC m=+88.115033357 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 05:45:33.355035 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 05:45:33.355047 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 05:45:33.355050 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 05:45:33.355057 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 05:45:33.355063 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 05:45:33.355073 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 05:45:33.355079 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-27 05:46:05.355073035 +0000 UTC m=+88.115142350 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 05:45:33.355123 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-27 05:46:05.355111186 +0000 UTC m=+88.115180511 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.356392 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.356419 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.356431 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.356447 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.356457 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:33Z","lastTransitionTime":"2025-12-27T05:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.458557 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.458597 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.458606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.458621 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.458629 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:33Z","lastTransitionTime":"2025-12-27T05:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.502551 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.502665 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 05:45:33.502869 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 05:45:33.502673 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxmb9" podUID="d7ad49d7-d1ed-4414-8a10-778d020e1da5" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.503012 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 05:45:33.503178 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.503290 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:33 crc kubenswrapper[4760]: E1227 05:45:33.503434 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.562025 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.562132 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.562163 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.562193 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.562217 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:33Z","lastTransitionTime":"2025-12-27T05:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.665139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.665175 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.665184 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.665199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.665210 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:33Z","lastTransitionTime":"2025-12-27T05:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.767511 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.767557 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.767568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.767585 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.767595 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:33Z","lastTransitionTime":"2025-12-27T05:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.870477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.870541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.870560 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.870584 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.870600 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:33Z","lastTransitionTime":"2025-12-27T05:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.912142 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmm9v_7c8aa557-e11a-4c40-9179-22811f44ff18/ovnkube-controller/0.log" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.914172 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerStarted","Data":"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f"} Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.915072 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.973298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.973331 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.973343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.973361 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:33 crc kubenswrapper[4760]: I1227 05:45:33.973374 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:33Z","lastTransitionTime":"2025-12-27T05:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.075496 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.075537 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.075549 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.075565 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.075578 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:34Z","lastTransitionTime":"2025-12-27T05:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.116399 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bxmb9"] Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.116528 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:34 crc kubenswrapper[4760]: E1227 05:45:34.116637 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxmb9" podUID="d7ad49d7-d1ed-4414-8a10-778d020e1da5" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.178375 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.178423 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.178436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.178455 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.178467 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:34Z","lastTransitionTime":"2025-12-27T05:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.281213 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.281287 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.281303 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.281322 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.281335 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:34Z","lastTransitionTime":"2025-12-27T05:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.384410 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.384487 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.384503 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.384526 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.384542 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:34Z","lastTransitionTime":"2025-12-27T05:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.487178 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.487218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.487231 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.487249 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.487262 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:34Z","lastTransitionTime":"2025-12-27T05:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.589819 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.589852 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.589863 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.589879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.589889 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:34Z","lastTransitionTime":"2025-12-27T05:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.691973 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.692012 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.692021 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.692036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.692047 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:34Z","lastTransitionTime":"2025-12-27T05:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.794182 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.794242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.794259 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.794284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.794303 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:34Z","lastTransitionTime":"2025-12-27T05:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.897606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.897677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.897701 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.897730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:34 crc kubenswrapper[4760]: I1227 05:45:34.897751 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:34Z","lastTransitionTime":"2025-12-27T05:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.001506 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.001585 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.001603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.001630 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.001648 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:35Z","lastTransitionTime":"2025-12-27T05:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.104953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.105021 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.105048 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.105079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.105149 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:35Z","lastTransitionTime":"2025-12-27T05:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.208312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.208376 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.208394 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.208419 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.208436 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:35Z","lastTransitionTime":"2025-12-27T05:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.311134 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.311195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.311213 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.311239 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.311257 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:35Z","lastTransitionTime":"2025-12-27T05:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.414133 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.414192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.414211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.414238 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.414259 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:35Z","lastTransitionTime":"2025-12-27T05:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.501931 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.502046 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.502046 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:35 crc kubenswrapper[4760]: E1227 05:45:35.502157 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.502193 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:35 crc kubenswrapper[4760]: E1227 05:45:35.502321 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 27 05:45:35 crc kubenswrapper[4760]: E1227 05:45:35.502539 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 27 05:45:35 crc kubenswrapper[4760]: E1227 05:45:35.502699 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bxmb9" podUID="d7ad49d7-d1ed-4414-8a10-778d020e1da5" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.516809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.516864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.516889 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.516918 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.516940 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:35Z","lastTransitionTime":"2025-12-27T05:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.620546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.620639 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.620666 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.620698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.620722 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:35Z","lastTransitionTime":"2025-12-27T05:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.724047 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.724153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.724172 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.724199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.724217 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:35Z","lastTransitionTime":"2025-12-27T05:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.827457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.827520 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.827536 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.827561 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.827577 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:35Z","lastTransitionTime":"2025-12-27T05:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.929723 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.929781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.929798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.929822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:35 crc kubenswrapper[4760]: I1227 05:45:35.929841 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:35Z","lastTransitionTime":"2025-12-27T05:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.032623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.032768 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.032793 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.032826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.032844 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:36Z","lastTransitionTime":"2025-12-27T05:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.136520 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.136579 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.136618 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.136645 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.136662 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:36Z","lastTransitionTime":"2025-12-27T05:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.240140 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.240200 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.240218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.240242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.240260 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:36Z","lastTransitionTime":"2025-12-27T05:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.343794 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.343862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.343882 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.343907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.343925 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:36Z","lastTransitionTime":"2025-12-27T05:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.447044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.447173 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.447198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.447227 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.447248 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:36Z","lastTransitionTime":"2025-12-27T05:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.550685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.550736 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.550749 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.550766 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.550779 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-27T05:45:36Z","lastTransitionTime":"2025-12-27T05:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.653385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.653475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.653503 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.653530 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.653680 4760 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.708342 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9wjz8"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.708738 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zl9wx"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.709007 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.709395 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.710981 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wsxtc"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.711599 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.717775 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.719244 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.719342 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.719410 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.722712 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.723057 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.723346 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.723745 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.724385 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.724667 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.757736 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.759000 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.759309 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.761715 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.762072 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.774507 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.774850 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.774877 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.774957 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.775301 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.776173 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.774700 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x4v5w"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.779425 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.780033 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.780205 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pdd96"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.780376 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.780576 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.781159 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.781170 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.781295 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.781456 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.781542 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.782276 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.782314 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.782338 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.782452 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.782536 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.782854 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.783854 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ffbsv"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.784445 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.785825 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.786739 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.786966 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.787070 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.787187 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.787315 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.787417 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.787511 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.787641 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.787736 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.787831 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.787907 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.788111 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.788220 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.789196 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.789729 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.789797 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.790484 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-g5866"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.791005 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-g5866" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.796117 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.796487 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.796766 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.796865 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.796965 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.797210 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.797373 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.797652 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.797843 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.797991 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.798150 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.798586 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.801035 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2x9wk"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.801560 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.801569 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2x9wk" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.801731 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.803756 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5p2sr"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.804280 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzs8s"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.804458 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.804589 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.805745 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-mvf5k"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.807340 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.807669 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.808434 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.808831 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.808964 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.809292 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.809082 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.809458 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.809551 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.809588 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-cmmr9"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.809656 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.809808 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.816037 4760 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.816132 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.809423 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.818139 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.818626 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.818778 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.819221 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.821899 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.822059 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.822568 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.822809 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.823077 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.823214 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.823380 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.823503 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.823889 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.824118 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.824270 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.824610 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.824821 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.830896 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.831378 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.831638 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.853237 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.853785 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cf3096b8-b961-454b-9647-ac2b9d3868ca-image-import-ca\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.853948 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32612889-890b-4efb-a777-8ad13a778841-config\") pod \"machine-api-operator-5694c8668f-9wjz8\" (UID: \"32612889-890b-4efb-a777-8ad13a778841\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.854053 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx6k5\" (UniqueName: \"kubernetes.io/projected/cf3096b8-b961-454b-9647-ac2b9d3868ca-kube-api-access-qx6k5\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.854180 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf3096b8-b961-454b-9647-ac2b9d3868ca-config\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.854184 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.854639 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.854289 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.856208 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf3096b8-b961-454b-9647-ac2b9d3868ca-serving-cert\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.856778 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf3096b8-b961-454b-9647-ac2b9d3868ca-audit-dir\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.856878 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cf3096b8-b961-454b-9647-ac2b9d3868ca-audit\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.856974 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zl9wx\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.857081 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c14515f-ee0e-4560-bed2-7ef5160b61ec-serving-cert\") pod \"controller-manager-879f6c89f-zl9wx\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.857219 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cf3096b8-b961-454b-9647-ac2b9d3868ca-etcd-client\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.857313 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cf3096b8-b961-454b-9647-ac2b9d3868ca-encryption-config\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.857407 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-config\") pod \"controller-manager-879f6c89f-zl9wx\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.857493 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-client-ca\") pod \"controller-manager-879f6c89f-zl9wx\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.857645 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/cf3096b8-b961-454b-9647-ac2b9d3868ca-node-pullsecrets\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.857732 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf3096b8-b961-454b-9647-ac2b9d3868ca-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.857850 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd89p\" (UniqueName: \"kubernetes.io/projected/32612889-890b-4efb-a777-8ad13a778841-kube-api-access-hd89p\") pod \"machine-api-operator-5694c8668f-9wjz8\" (UID: \"32612889-890b-4efb-a777-8ad13a778841\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.857950 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cf3096b8-b961-454b-9647-ac2b9d3868ca-etcd-serving-ca\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.858038 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sksfr\" (UniqueName: \"kubernetes.io/projected/3c14515f-ee0e-4560-bed2-7ef5160b61ec-kube-api-access-sksfr\") pod \"controller-manager-879f6c89f-zl9wx\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.858167 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/32612889-890b-4efb-a777-8ad13a778841-images\") pod \"machine-api-operator-5694c8668f-9wjz8\" (UID: \"32612889-890b-4efb-a777-8ad13a778841\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.858386 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/32612889-890b-4efb-a777-8ad13a778841-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9wjz8\" (UID: \"32612889-890b-4efb-a777-8ad13a778841\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.856982 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.854326 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.856897 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.854723 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.854611 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.855202 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.859496 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.859922 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.860698 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tqbx6"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.860911 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.861129 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.861270 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.861369 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-tqbx6" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.861390 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.861654 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.862153 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.862653 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.862987 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.863164 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.863511 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.864883 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zfw4n"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.865593 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zfw4n" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.865941 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.866840 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.868159 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.868880 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.869051 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.869554 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.869878 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.870008 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.870226 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-phfqm"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.870630 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-phfqm" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.871562 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.872156 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.875819 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.876310 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.876407 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.876548 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.881211 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.881380 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.885709 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w927h"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.885893 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.887307 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w927h" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.888984 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.889481 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.890508 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6cgcs"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.891143 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6cgcs" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.891684 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.893071 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.893578 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.893820 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.894342 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.895189 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d8hds"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.895780 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8hds" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.896627 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.898185 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zl9wx"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.898278 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.899953 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9wjz8"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.901263 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.902063 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.902587 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.903119 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.903990 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.905132 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.907228 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wsxtc"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.908627 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-h4drj"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.909487 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.909808 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.912441 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.912577 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pdd96"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.917235 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ffbsv"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.918996 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.921075 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kkb4n"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.921739 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kkb4n" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.923404 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d8hds"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.925028 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x4v5w"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.926600 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.931158 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-g5866"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.931874 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2x9wk"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.932128 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.932583 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.933245 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zfw4n"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.934052 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.934878 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cmmr9"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.935757 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5p2sr"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.936615 4760 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.937561 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.938705 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.940595 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-phfqm"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.941936 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.943330 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tqbx6"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.944635 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w927h"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.946127 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.947409 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.948679 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.949919 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.951663 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.954367 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.957444 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzs8s"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959003 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85882885-093a-4277-80b0-c8db3141030f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ffbsv\" (UID: \"85882885-093a-4277-80b0-c8db3141030f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959033 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf3096b8-b961-454b-9647-ac2b9d3868ca-node-pullsecrets\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc 
kubenswrapper[4760]: I1227 05:45:36.959057 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf3096b8-b961-454b-9647-ac2b9d3868ca-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959102 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/32612889-890b-4efb-a777-8ad13a778841-images\") pod \"machine-api-operator-5694c8668f-9wjz8\" (UID: \"32612889-890b-4efb-a777-8ad13a778841\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959121 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd89p\" (UniqueName: \"kubernetes.io/projected/32612889-890b-4efb-a777-8ad13a778841-kube-api-access-hd89p\") pod \"machine-api-operator-5694c8668f-9wjz8\" (UID: \"32612889-890b-4efb-a777-8ad13a778841\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959143 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cf3096b8-b961-454b-9647-ac2b9d3868ca-etcd-serving-ca\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959164 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sksfr\" (UniqueName: \"kubernetes.io/projected/3c14515f-ee0e-4560-bed2-7ef5160b61ec-kube-api-access-sksfr\") pod \"controller-manager-879f6c89f-zl9wx\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959192 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/32612889-890b-4efb-a777-8ad13a778841-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9wjz8\" (UID: \"32612889-890b-4efb-a777-8ad13a778841\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959214 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cf3096b8-b961-454b-9647-ac2b9d3868ca-image-import-ca\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959237 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85882885-093a-4277-80b0-c8db3141030f-serving-cert\") pod \"authentication-operator-69f744f599-ffbsv\" (UID: \"85882885-093a-4277-80b0-c8db3141030f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959255 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/85882885-093a-4277-80b0-c8db3141030f-service-ca-bundle\") pod \"authentication-operator-69f744f599-ffbsv\" (UID: \"85882885-093a-4277-80b0-c8db3141030f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959276 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32612889-890b-4efb-a777-8ad13a778841-config\") pod \"machine-api-operator-5694c8668f-9wjz8\" (UID: \"32612889-890b-4efb-a777-8ad13a778841\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959296 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx6k5\" (UniqueName: \"kubernetes.io/projected/cf3096b8-b961-454b-9647-ac2b9d3868ca-kube-api-access-qx6k5\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959314 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf3096b8-b961-454b-9647-ac2b9d3868ca-config\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959321 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959334 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf3096b8-b961-454b-9647-ac2b9d3868ca-serving-cert\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959368 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85882885-093a-4277-80b0-c8db3141030f-config\") pod \"authentication-operator-69f744f599-ffbsv\" (UID: \"85882885-093a-4277-80b0-c8db3141030f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.959530 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf3096b8-b961-454b-9647-ac2b9d3868ca-node-pullsecrets\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.960167 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32612889-890b-4efb-a777-8ad13a778841-config\") pod \"machine-api-operator-5694c8668f-9wjz8\" (UID: \"32612889-890b-4efb-a777-8ad13a778841\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.960286 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cf3096b8-b961-454b-9647-ac2b9d3868ca-etcd-serving-ca\") pod \"apiserver-76f77b778f-wsxtc\" (UID: 
\"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.960488 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/32612889-890b-4efb-a777-8ad13a778841-images\") pod \"machine-api-operator-5694c8668f-9wjz8\" (UID: \"32612889-890b-4efb-a777-8ad13a778841\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.960580 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf3096b8-b961-454b-9647-ac2b9d3868ca-audit-dir\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.960744 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf3096b8-b961-454b-9647-ac2b9d3868ca-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.960904 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cf3096b8-b961-454b-9647-ac2b9d3868ca-image-import-ca\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.960965 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf3096b8-b961-454b-9647-ac2b9d3868ca-config\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.961026 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf3096b8-b961-454b-9647-ac2b9d3868ca-audit-dir\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.961084 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cf3096b8-b961-454b-9647-ac2b9d3868ca-audit\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.961131 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zl9wx\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.961235 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c14515f-ee0e-4560-bed2-7ef5160b61ec-serving-cert\") pod \"controller-manager-879f6c89f-zl9wx\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.961607 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cf3096b8-b961-454b-9647-ac2b9d3868ca-etcd-client\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.961711 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cf3096b8-b961-454b-9647-ac2b9d3868ca-audit\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.962164 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cf3096b8-b961-454b-9647-ac2b9d3868ca-encryption-config\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.962242 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.962257 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-config\") pod \"controller-manager-879f6c89f-zl9wx\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.962296 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-client-ca\") pod \"controller-manager-879f6c89f-zl9wx\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.962374 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2kg7\" (UniqueName: \"kubernetes.io/projected/85882885-093a-4277-80b0-c8db3141030f-kube-api-access-g2kg7\") pod \"authentication-operator-69f744f599-ffbsv\" (UID: \"85882885-093a-4277-80b0-c8db3141030f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.963048 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zl9wx\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.964085 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-client-ca\") pod \"controller-manager-879f6c89f-zl9wx\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:36 crc 
kubenswrapper[4760]: I1227 05:45:36.964400 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-config\") pod \"controller-manager-879f6c89f-zl9wx\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.965177 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6cgcs"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.966368 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cf3096b8-b961-454b-9647-ac2b9d3868ca-etcd-client\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.966596 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/32612889-890b-4efb-a777-8ad13a778841-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9wjz8\" (UID: \"32612889-890b-4efb-a777-8ad13a778841\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.967068 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c14515f-ee0e-4560-bed2-7ef5160b61ec-serving-cert\") pod \"controller-manager-879f6c89f-zl9wx\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.966613 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cf3096b8-b961-454b-9647-ac2b9d3868ca-encryption-config\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.967443 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.968566 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf3096b8-b961-454b-9647-ac2b9d3868ca-serving-cert\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.968671 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l26l5"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.969691 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.970858 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-p9vf5"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.971152 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.971915 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-p9vf5" Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.972473 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-p9vf5"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.973474 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l26l5"] Dec 27 05:45:36 crc kubenswrapper[4760]: I1227 05:45:36.990976 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.011223 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.012557 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-cdjdr"] Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.013292 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cdjdr" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.021925 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cdjdr"] Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.051416 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.062862 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85882885-093a-4277-80b0-c8db3141030f-serving-cert\") pod \"authentication-operator-69f744f599-ffbsv\" (UID: \"85882885-093a-4277-80b0-c8db3141030f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.062952 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85882885-093a-4277-80b0-c8db3141030f-service-ca-bundle\") pod \"authentication-operator-69f744f599-ffbsv\" (UID: \"85882885-093a-4277-80b0-c8db3141030f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.063037 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85882885-093a-4277-80b0-c8db3141030f-config\") pod \"authentication-operator-69f744f599-ffbsv\" (UID: \"85882885-093a-4277-80b0-c8db3141030f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.063726 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85882885-093a-4277-80b0-c8db3141030f-service-ca-bundle\") pod \"authentication-operator-69f744f599-ffbsv\" (UID: \"85882885-093a-4277-80b0-c8db3141030f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.063750 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85882885-093a-4277-80b0-c8db3141030f-config\") pod \"authentication-operator-69f744f599-ffbsv\" (UID: \"85882885-093a-4277-80b0-c8db3141030f\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.064784 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2kg7\" (UniqueName: \"kubernetes.io/projected/85882885-093a-4277-80b0-c8db3141030f-kube-api-access-g2kg7\") pod \"authentication-operator-69f744f599-ffbsv\" (UID: \"85882885-093a-4277-80b0-c8db3141030f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.064897 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85882885-093a-4277-80b0-c8db3141030f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ffbsv\" (UID: \"85882885-093a-4277-80b0-c8db3141030f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.065635 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85882885-093a-4277-80b0-c8db3141030f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ffbsv\" (UID: \"85882885-093a-4277-80b0-c8db3141030f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.067909 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85882885-093a-4277-80b0-c8db3141030f-serving-cert\") pod \"authentication-operator-69f744f599-ffbsv\" (UID: \"85882885-093a-4277-80b0-c8db3141030f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.071561 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.091263 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.111633 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.132737 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.152587 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.171271 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.190835 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.210939 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.237063 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.251389 4760 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.271328 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.292298 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.312016 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.331272 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.351367 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.372416 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.395951 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.413287 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.432172 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.452438 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.471590 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.492894 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.501743 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.501750 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.501807 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.501809 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.526779 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.531816 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.573082 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.591426 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.612804 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.632392 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.652612 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.676624 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.691695 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.712416 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.732152 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.752683 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.771861 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.791653 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.812866 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.832683 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.853260 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.873256 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.890016 4760 request.go:700] Waited for 1.017533782s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0 Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.892674 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.927788 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.931347 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.952934 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.973284 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 27 05:45:37 crc kubenswrapper[4760]: I1227 05:45:37.992322 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.012439 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.041403 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.051671 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.072371 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.092109 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.111909 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.132958 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.152146 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.173358 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.193203 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.214504 4760 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.231984 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.251992 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.271349 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.291270 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.312341 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.332702 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.352181 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.371463 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.391913 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.412507 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.435122 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.451906 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.472392 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.491796 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.512520 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.532746 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.552182 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.572650 4760 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.592425 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.612287 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.632366 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.652139 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.731967 4760 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.752455 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.771878 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.793045 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.811740 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.831698 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.854457 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.872768 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.891600 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.910163 4760 request.go:700] Waited for 1.896392988s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0 Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.911870 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.970984 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.991321 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.996292 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8c470872-3eb8-4afc-a357-b225ba9b6c94-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrhvz\" (UID: \"8c470872-3eb8-4afc-a357-b225ba9b6c94\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.996429 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-stats-auth\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.996478 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-serving-cert\") pod \"console-operator-58897d9998-2x9wk\" (UID: \"fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1\") " pod="openshift-console-operator/console-operator-58897d9998-2x9wk" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.996528 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/933d294b-c115-4bd3-ade2-1ae37665ae1b-audit-dir\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.996571 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.996616 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws2z9\" (UniqueName: \"kubernetes.io/projected/3dfa4237-e979-4215-9f2c-20aa6303cae7-kube-api-access-ws2z9\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.996660 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.996743 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.996794 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tchcj\" 
(UniqueName: \"kubernetes.io/projected/933d294b-c115-4bd3-ade2-1ae37665ae1b-kube-api-access-tchcj\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.996871 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-982gl\" (UniqueName: \"kubernetes.io/projected/fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2-kube-api-access-982gl\") pod \"cluster-samples-operator-665b6dd947-72sjb\" (UID: \"fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997017 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997083 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-trusted-ca-bundle\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997160 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-machine-approver-tls\") pod \"machine-approver-56656f9798-4pjtd\" (UID: \"17a2cff3-78a9-45b6-a044-81e7cc36ca0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997203 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-metrics-certs\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997275 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhm94\" (UniqueName: \"kubernetes.io/projected/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-kube-api-access-mhm94\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997352 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:38 crc kubenswrapper[4760]: E1227 05:45:38.997392 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:39.497366516 +0000 UTC m=+62.257435871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997458 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-72sjb\" (UID: \"fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997560 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bf23480c-73e2-4c48-b39c-92ef17211274-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997599 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rs9rz\" (UID: \"acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997633 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997698 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-trusted-ca\") pod \"console-operator-58897d9998-2x9wk\" (UID: \"fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1\") " pod="openshift-console-operator/console-operator-58897d9998-2x9wk" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997738 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-default-certificate\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997785 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997828 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997870 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shhgt\" (UniqueName: \"kubernetes.io/projected/8c470872-3eb8-4afc-a357-b225ba9b6c94-kube-api-access-shhgt\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrhvz\" (UID: \"8c470872-3eb8-4afc-a357-b225ba9b6c94\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997915 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-serving-cert\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.997966 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.998141 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-bound-sa-token\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.998246 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w57dj\" (UniqueName: \"kubernetes.io/projected/acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da-kube-api-access-w57dj\") pod \"openshift-apiserver-operator-796bbdcf4f-rs9rz\" (UID: \"acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.998290 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-service-ca\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.998325 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64tmq\" (UniqueName: \"kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-kube-api-access-64tmq\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.998358 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.998390 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.998425 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/364f3367-d5ab-4345-9c3f-bb7529d76c6f-etcd-client\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.998497 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-service-ca-bundle\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.998552 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-serving-cert\") pod \"route-controller-manager-6576b87f9c-rpcr2\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.998619 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64a3b986-ea3d-4cbb-83ba-44971b220664-serving-cert\") pod \"openshift-config-operator-7777fb866f-pdd96\" (UID: \"64a3b986-ea3d-4cbb-83ba-44971b220664\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.998668 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.998705 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwsxh\" (UniqueName: \"kubernetes.io/projected/0015afce-dba1-4f1d-a3d3-5f9abe477e43-kube-api-access-bwsxh\") pod \"downloads-7954f5f757-g5866\" (UID: \"0015afce-dba1-4f1d-a3d3-5f9abe477e43\") " pod="openshift-console/downloads-7954f5f757-g5866" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.998740 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-config\") pod \"console-operator-58897d9998-2x9wk\" (UID: \"fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1\") " pod="openshift-console-operator/console-operator-58897d9998-2x9wk" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.998770 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/364f3367-d5ab-4345-9c3f-bb7529d76c6f-serving-cert\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.998825 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4dlz\" (UniqueName: \"kubernetes.io/projected/364f3367-d5ab-4345-9c3f-bb7529d76c6f-kube-api-access-s4dlz\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.999332 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-config\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.999374 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-oauth-serving-cert\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.999479 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp9vm\" (UniqueName: \"kubernetes.io/projected/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-kube-api-access-bp9vm\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.999550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clt6p\" (UniqueName: \"kubernetes.io/projected/64a3b986-ea3d-4cbb-83ba-44971b220664-kube-api-access-clt6p\") pod \"openshift-config-operator-7777fb866f-pdd96\" (UID: \"64a3b986-ea3d-4cbb-83ba-44971b220664\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.999597 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/364f3367-d5ab-4345-9c3f-bb7529d76c6f-config\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.999645 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-config\") pod \"machine-approver-56656f9798-4pjtd\" (UID: \"17a2cff3-78a9-45b6-a044-81e7cc36ca0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.999702 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bf23480c-73e2-4c48-b39c-92ef17211274-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.999749 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf23480c-73e2-4c48-b39c-92ef17211274-trusted-ca\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.999798 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-etcd-client\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:38 crc kubenswrapper[4760]: I1227 05:45:38.999841 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/64a3b986-ea3d-4cbb-83ba-44971b220664-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pdd96\" (UID: \"64a3b986-ea3d-4cbb-83ba-44971b220664\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:38.999885 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-client-ca\") pod \"route-controller-manager-6576b87f9c-rpcr2\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:38.999967 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-oauth-config\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000000 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000022 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-auth-proxy-config\") pod \"machine-approver-56656f9798-4pjtd\" (UID: \"17a2cff3-78a9-45b6-a044-81e7cc36ca0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000061 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-audit-policies\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000081 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-audit-dir\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000126 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/364f3367-d5ab-4345-9c3f-bb7529d76c6f-etcd-ca\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000149 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rs9rz\" (UID: \"acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000168 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-audit-policies\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000188 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfx4n\" (UniqueName: \"kubernetes.io/projected/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-kube-api-access-pfx4n\") pod \"machine-approver-56656f9798-4pjtd\" (UID: \"17a2cff3-78a9-45b6-a044-81e7cc36ca0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000238 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmrx2\" (UniqueName: 
\"kubernetes.io/projected/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-kube-api-access-gmrx2\") pod \"route-controller-manager-6576b87f9c-rpcr2\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000278 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-registry-tls\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000301 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-serving-cert\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000412 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000491 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c470872-3eb8-4afc-a357-b225ba9b6c94-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrhvz\" (UID: \"8c470872-3eb8-4afc-a357-b225ba9b6c94\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000539 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p22g\" (UniqueName: \"kubernetes.io/projected/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-kube-api-access-9p22g\") pod \"console-operator-58897d9998-2x9wk\" (UID: \"fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1\") " pod="openshift-console-operator/console-operator-58897d9998-2x9wk" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000652 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-encryption-config\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000722 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bf23480c-73e2-4c48-b39c-92ef17211274-registry-certificates\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000768 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-config\") pod \"route-controller-manager-6576b87f9c-rpcr2\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.000815 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/364f3367-d5ab-4345-9c3f-bb7529d76c6f-etcd-service-ca\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.010978 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.031674 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.052318 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.070905 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.101321 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:39 crc kubenswrapper[4760]: E1227 05:45:39.101486 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:39.60144984 +0000 UTC m=+62.361519195 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.101556 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b1e43e2-44f4-4c7f-acea-660e06d1ef90-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dv2bd\" (UID: \"8b1e43e2-44f4-4c7f-acea-660e06d1ef90\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.101618 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64a3b986-ea3d-4cbb-83ba-44971b220664-serving-cert\") pod \"openshift-config-operator-7777fb866f-pdd96\" (UID: \"64a3b986-ea3d-4cbb-83ba-44971b220664\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.101673 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33f25b04-37cd-4724-9db9-b1816dfb71bc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r48mz\" (UID: \"33f25b04-37cd-4724-9db9-b1816dfb71bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.101740 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-config\") pod \"console-operator-58897d9998-2x9wk\" (UID: \"fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1\") " pod="openshift-console-operator/console-operator-58897d9998-2x9wk" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.101777 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rct26\" (UniqueName: \"kubernetes.io/projected/67b40739-4e24-41b5-9d6a-7ab19939c81c-kube-api-access-rct26\") pod \"package-server-manager-789f6589d5-f9q5z\" (UID: \"67b40739-4e24-41b5-9d6a-7ab19939c81c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.101812 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/411e4a40-9030-4d01-aeb4-c5dd6d25b9b2-signing-key\") pod \"service-ca-9c57cc56f-6cgcs\" (UID: \"411e4a40-9030-4d01-aeb4-c5dd6d25b9b2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6cgcs" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.101845 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gffgl\" (UniqueName: \"kubernetes.io/projected/e75b3601-c0ac-4de5-9847-52a8c087a6f9-kube-api-access-gffgl\") pod \"multus-admission-controller-857f4d67dd-phfqm\" (UID: \"e75b3601-c0ac-4de5-9847-52a8c087a6f9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-phfqm" Dec 27 
05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.101876 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/55aab342-3de1-46cf-8d85-d71345fd1538-metrics-tls\") pod \"dns-default-p9vf5\" (UID: \"55aab342-3de1-46cf-8d85-d71345fd1538\") " pod="openshift-dns/dns-default-p9vf5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.101905 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d886c-9f72-4358-8ddb-f820f7181639-registration-dir\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.101933 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d886c-9f72-4358-8ddb-f820f7181639-plugins-dir\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.101979 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp9vm\" (UniqueName: \"kubernetes.io/projected/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-kube-api-access-bp9vm\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102012 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clt6p\" (UniqueName: \"kubernetes.io/projected/64a3b986-ea3d-4cbb-83ba-44971b220664-kube-api-access-clt6p\") pod \"openshift-config-operator-7777fb866f-pdd96\" (UID: \"64a3b986-ea3d-4cbb-83ba-44971b220664\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102044 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-config\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102076 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/364f3367-d5ab-4345-9c3f-bb7529d76c6f-config\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102153 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55aab342-3de1-46cf-8d85-d71345fd1538-config-volume\") pod \"dns-default-p9vf5\" (UID: \"55aab342-3de1-46cf-8d85-d71345fd1538\") " pod="openshift-dns/dns-default-p9vf5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102197 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bf23480c-73e2-4c48-b39c-92ef17211274-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: 
\"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102233 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-etcd-client\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102266 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/64a3b986-ea3d-4cbb-83ba-44971b220664-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pdd96\" (UID: \"64a3b986-ea3d-4cbb-83ba-44971b220664\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102301 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-config\") pod \"machine-approver-56656f9798-4pjtd\" (UID: \"17a2cff3-78a9-45b6-a044-81e7cc36ca0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102332 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-client-ca\") pod \"route-controller-manager-6576b87f9c-rpcr2\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102364 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/33f25b04-37cd-4724-9db9-b1816dfb71bc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r48mz\" (UID: \"33f25b04-37cd-4724-9db9-b1816dfb71bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102399 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qdm4\" (UniqueName: \"kubernetes.io/projected/9e844584-fa56-4ff9-b454-bcb89ae547db-kube-api-access-5qdm4\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgkr\" (UID: \"9e844584-fa56-4ff9-b454-bcb89ae547db\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102434 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-oauth-config\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102467 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9a5980c3-ee00-41b1-9707-f349149a53c4-proxy-tls\") pod \"machine-config-operator-74547568cd-6j9tg\" (UID: \"9a5980c3-ee00-41b1-9707-f349149a53c4\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102513 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-audit-policies\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102565 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-audit-dir\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102620 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102674 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-auth-proxy-config\") pod \"machine-approver-56656f9798-4pjtd\" (UID: \"17a2cff3-78a9-45b6-a044-81e7cc36ca0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102731 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-audit-dir\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.102997 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vjrl\" (UniqueName: \"kubernetes.io/projected/411e4a40-9030-4d01-aeb4-c5dd6d25b9b2-kube-api-access-7vjrl\") pod \"service-ca-9c57cc56f-6cgcs\" (UID: \"411e4a40-9030-4d01-aeb4-c5dd6d25b9b2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6cgcs" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103048 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/385068c5-bdbb-41fe-b1bc-1597b2a461ea-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wzqd4\" (UID: \"385068c5-bdbb-41fe-b1bc-1597b2a461ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103038 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/64a3b986-ea3d-4cbb-83ba-44971b220664-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pdd96\" (UID: \"64a3b986-ea3d-4cbb-83ba-44971b220664\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103146 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d80f15d-ce8a-4c94-834d-9ec4829bc34f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q7d2f\" (UID: \"5d80f15d-ce8a-4c94-834d-9ec4829bc34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103263 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmrx2\" (UniqueName: \"kubernetes.io/projected/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-kube-api-access-gmrx2\") pod \"route-controller-manager-6576b87f9c-rpcr2\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103318 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bf23480c-73e2-4c48-b39c-92ef17211274-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103342 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bc145080-11cc-455d-b8d4-7baab6859228-srv-cert\") pod \"olm-operator-6b444d44fb-tlp92\" (UID: \"bc145080-11cc-455d-b8d4-7baab6859228\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103407 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-registry-tls\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103446 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5493a447-e04c-4f7b-b8ed-6816543ee631-tmpfs\") pod \"packageserver-d55dfcdfc-mq8g4\" (UID: \"5493a447-e04c-4f7b-b8ed-6816543ee631\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103482 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a00e2aaf-ac7c-4672-93d5-fc662e271b41-srv-cert\") pod \"catalog-operator-68c6474976-sbswf\" (UID: \"a00e2aaf-ac7c-4672-93d5-fc662e271b41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103511 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1521745e-1f44-4a25-8eff-05062c2c24ef-metrics-tls\") pod \"ingress-operator-5b745b69d9-4kjv6\" (UID: \"1521745e-1f44-4a25-8eff-05062c2c24ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103548 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/073ab0ad-d76e-4ea2-8ecf-eb3436e824bb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ctvzp\" (UID: \"073ab0ad-d76e-4ea2-8ecf-eb3436e824bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103599 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p22g\" (UniqueName: \"kubernetes.io/projected/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-kube-api-access-9p22g\") pod \"console-operator-58897d9998-2x9wk\" (UID: \"fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1\") " pod="openshift-console-operator/console-operator-58897d9998-2x9wk" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103634 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d80f15d-ce8a-4c94-834d-9ec4829bc34f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q7d2f\" (UID: \"5d80f15d-ce8a-4c94-834d-9ec4829bc34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103668 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-config\") pod \"route-controller-manager-6576b87f9c-rpcr2\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103705 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a72f86c-c203-482f-b0e9-beb1f3f77fa0-serving-cert\") pod \"service-ca-operator-777779d784-qxkdj\" (UID: \"4a72f86c-c203-482f-b0e9-beb1f3f77fa0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103763 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/933d294b-c115-4bd3-ade2-1ae37665ae1b-audit-dir\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103797 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-serving-cert\") pod \"console-operator-58897d9998-2x9wk\" (UID: \"fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1\") " pod="openshift-console-operator/console-operator-58897d9998-2x9wk" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103834 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77jp9\" (UniqueName: \"kubernetes.io/projected/9a5980c3-ee00-41b1-9707-f349149a53c4-kube-api-access-77jp9\") pod \"machine-config-operator-74547568cd-6j9tg\" (UID: \"9a5980c3-ee00-41b1-9707-f349149a53c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.103891 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.104003 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/933d294b-c115-4bd3-ade2-1ae37665ae1b-audit-dir\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.104070 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.104166 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tchcj\" (UniqueName: \"kubernetes.io/projected/933d294b-c115-4bd3-ade2-1ae37665ae1b-kube-api-access-tchcj\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.104227 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.104280 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xssq2\" (UniqueName: \"kubernetes.io/projected/3cacaf44-3b98-45b6-9a51-e41e33c4679d-kube-api-access-xssq2\") pod \"machine-config-server-kkb4n\" (UID: \"3cacaf44-3b98-45b6-9a51-e41e33c4679d\") " pod="openshift-machine-config-operator/machine-config-server-kkb4n" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.104652 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhm94\" (UniqueName: \"kubernetes.io/projected/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-kube-api-access-mhm94\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.104759 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rs9rz\" (UID: \"acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.104904 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.104957 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-72sjb\" (UID: \"fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.105009 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7cdk\" (UniqueName: \"kubernetes.io/projected/bc145080-11cc-455d-b8d4-7baab6859228-kube-api-access-t7cdk\") pod \"olm-operator-6b444d44fb-tlp92\" (UID: \"bc145080-11cc-455d-b8d4-7baab6859228\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.105060 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-default-certificate\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.105145 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.105369 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27npl\" (UniqueName: \"kubernetes.io/projected/55aab342-3de1-46cf-8d85-d71345fd1538-kube-api-access-27npl\") pod \"dns-default-p9vf5\" (UID: \"55aab342-3de1-46cf-8d85-d71345fd1538\") " pod="openshift-dns/dns-default-p9vf5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.105434 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-serving-cert\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.105479 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.105538 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-bound-sa-token\") pod \"image-registry-697d97f7c8-dzs8s\" 
(UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.105586 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdmmd\" (UniqueName: \"kubernetes.io/projected/911b180f-4536-4181-956b-abd6e2c8e0d0-kube-api-access-cdmmd\") pod \"dns-operator-744455d44c-tqbx6\" (UID: \"911b180f-4536-4181-956b-abd6e2c8e0d0\") " pod="openshift-dns-operator/dns-operator-744455d44c-tqbx6" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.105632 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/411e4a40-9030-4d01-aeb4-c5dd6d25b9b2-signing-cabundle\") pod \"service-ca-9c57cc56f-6cgcs\" (UID: \"411e4a40-9030-4d01-aeb4-c5dd6d25b9b2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6cgcs" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.105679 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e844584-fa56-4ff9-b454-bcb89ae547db-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgkr\" (UID: \"9e844584-fa56-4ff9-b454-bcb89ae547db\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.105725 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/385068c5-bdbb-41fe-b1bc-1597b2a461ea-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wzqd4\" (UID: \"385068c5-bdbb-41fe-b1bc-1597b2a461ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.105799 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d886c-9f72-4358-8ddb-f820f7181639-csi-data-dir\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.105843 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a00e2aaf-ac7c-4672-93d5-fc662e271b41-profile-collector-cert\") pod \"catalog-operator-68c6474976-sbswf\" (UID: \"a00e2aaf-ac7c-4672-93d5-fc662e271b41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.105892 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3cacaf44-3b98-45b6-9a51-e41e33c4679d-node-bootstrap-token\") pod \"machine-config-server-kkb4n\" (UID: \"3cacaf44-3b98-45b6-9a51-e41e33c4679d\") " pod="openshift-machine-config-operator/machine-config-server-kkb4n" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.106163 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/364f3367-d5ab-4345-9c3f-bb7529d76c6f-etcd-client\") pod \"etcd-operator-b45778765-5p2sr\" (UID: 
\"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.106222 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/67b40739-4e24-41b5-9d6a-7ab19939c81c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f9q5z\" (UID: \"67b40739-4e24-41b5-9d6a-7ab19939c81c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.106304 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/385068c5-bdbb-41fe-b1bc-1597b2a461ea-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wzqd4\" (UID: \"385068c5-bdbb-41fe-b1bc-1597b2a461ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.106361 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.106410 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwsxh\" (UniqueName: \"kubernetes.io/projected/0015afce-dba1-4f1d-a3d3-5f9abe477e43-kube-api-access-bwsxh\") pod \"downloads-7954f5f757-g5866\" (UID: \"0015afce-dba1-4f1d-a3d3-5f9abe477e43\") " pod="openshift-console/downloads-7954f5f757-g5866" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.106455 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/364f3367-d5ab-4345-9c3f-bb7529d76c6f-serving-cert\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.106504 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4dlz\" (UniqueName: \"kubernetes.io/projected/364f3367-d5ab-4345-9c3f-bb7529d76c6f-kube-api-access-s4dlz\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.106554 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp55s\" (UniqueName: \"kubernetes.io/projected/2f024673-515b-451c-b19c-f542b4cebba9-kube-api-access-hp55s\") pod \"marketplace-operator-79b997595-w927h\" (UID: \"2f024673-515b-451c-b19c-f542b4cebba9\") " pod="openshift-marketplace/marketplace-operator-79b997595-w927h" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.106598 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7765ee-7703-4b50-a91b-94f8cadaf3e3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-79bnd\" (UID: \"7b7765ee-7703-4b50-a91b-94f8cadaf3e3\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.106639 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-h4drj\" (UID: \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.107028 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58rsv\" (UniqueName: \"kubernetes.io/projected/7f4d886c-9f72-4358-8ddb-f820f7181639-kube-api-access-58rsv\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.107085 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/385068c5-bdbb-41fe-b1bc-1597b2a461ea-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wzqd4\" (UID: \"385068c5-bdbb-41fe-b1bc-1597b2a461ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.107199 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-oauth-serving-cert\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.107244 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62b0425e-20fc-41c8-99c0-cdccdc48d766-config-volume\") pod \"collect-profiles-29446905-rnvbt\" (UID: \"62b0425e-20fc-41c8-99c0-cdccdc48d766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.107298 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a72f86c-c203-482f-b0e9-beb1f3f77fa0-config\") pod \"service-ca-operator-777779d784-qxkdj\" (UID: \"4a72f86c-c203-482f-b0e9-beb1f3f77fa0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.107343 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b1e43e2-44f4-4c7f-acea-660e06d1ef90-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dv2bd\" (UID: \"8b1e43e2-44f4-4c7f-acea-660e06d1ef90\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.107391 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf23480c-73e2-4c48-b39c-92ef17211274-trusted-ca\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc 
kubenswrapper[4760]: I1227 05:45:39.107442 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5493a447-e04c-4f7b-b8ed-6816543ee631-webhook-cert\") pod \"packageserver-d55dfcdfc-mq8g4\" (UID: \"5493a447-e04c-4f7b-b8ed-6816543ee631\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.107498 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e844584-fa56-4ff9-b454-bcb89ae547db-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgkr\" (UID: \"9e844584-fa56-4ff9-b454-bcb89ae547db\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.107541 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-ready\") pod \"cni-sysctl-allowlist-ds-h4drj\" (UID: \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.107608 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bc145080-11cc-455d-b8d4-7baab6859228-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tlp92\" (UID: \"bc145080-11cc-455d-b8d4-7baab6859228\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.107653 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e75b3601-c0ac-4de5-9847-52a8c087a6f9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-phfqm\" (UID: \"e75b3601-c0ac-4de5-9847-52a8c087a6f9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-phfqm" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.107726 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/364f3367-d5ab-4345-9c3f-bb7529d76c6f-etcd-ca\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.108465 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d886c-9f72-4358-8ddb-f820f7181639-socket-dir\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.108519 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rs9rz\" (UID: \"acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.108620 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-audit-policies\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.108685 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfx4n\" (UniqueName: \"kubernetes.io/projected/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-kube-api-access-pfx4n\") pod \"machine-approver-56656f9798-4pjtd\" (UID: \"17a2cff3-78a9-45b6-a044-81e7cc36ca0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.108768 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33f25b04-37cd-4724-9db9-b1816dfb71bc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r48mz\" (UID: \"33f25b04-37cd-4724-9db9-b1816dfb71bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.108824 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f024673-515b-451c-b19c-f542b4cebba9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w927h\" (UID: \"2f024673-515b-451c-b19c-f542b4cebba9\") " pod="openshift-marketplace/marketplace-operator-79b997595-w927h" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.108873 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0478c05-e6ed-4970-ad35-5ef3904f94c9-cert\") pod \"ingress-canary-cdjdr\" (UID: \"d0478c05-e6ed-4970-ad35-5ef3904f94c9\") " pod="openshift-ingress-canary/ingress-canary-cdjdr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.108904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/911b180f-4536-4181-956b-abd6e2c8e0d0-metrics-tls\") pod \"dns-operator-744455d44c-tqbx6\" (UID: \"911b180f-4536-4181-956b-abd6e2c8e0d0\") " pod="openshift-dns-operator/dns-operator-744455d44c-tqbx6" Dec 27 05:45:39 crc kubenswrapper[4760]: E1227 05:45:39.109360 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:39.609334447 +0000 UTC m=+62.369403892 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.109847 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-serving-cert\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.110000 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.110081 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9a5980c3-ee00-41b1-9707-f349149a53c4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6j9tg\" (UID: \"9a5980c3-ee00-41b1-9707-f349149a53c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.110242 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vdcp\" (UniqueName: \"kubernetes.io/projected/db38a69b-db55-4829-8258-bf3da32477ac-kube-api-access-9vdcp\") pod \"control-plane-machine-set-operator-78cbb6b69f-zfw4n\" (UID: \"db38a69b-db55-4829-8258-bf3da32477ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zfw4n" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.110367 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1521745e-1f44-4a25-8eff-05062c2c24ef-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4kjv6\" (UID: \"1521745e-1f44-4a25-8eff-05062c2c24ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.110426 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d80f15d-ce8a-4c94-834d-9ec4829bc34f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q7d2f\" (UID: \"5d80f15d-ce8a-4c94-834d-9ec4829bc34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.110465 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1e43e2-44f4-4c7f-acea-660e06d1ef90-config\") pod \"kube-controller-manager-operator-78b949d7b-dv2bd\" (UID: \"8b1e43e2-44f4-4c7f-acea-660e06d1ef90\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.110607 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-encryption-config\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.110662 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c470872-3eb8-4afc-a357-b225ba9b6c94-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrhvz\" (UID: \"8c470872-3eb8-4afc-a357-b225ba9b6c94\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.110701 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh5dp\" (UniqueName: \"kubernetes.io/projected/073ab0ad-d76e-4ea2-8ecf-eb3436e824bb-kube-api-access-mh5dp\") pod \"machine-config-controller-84d6567774-ctvzp\" (UID: \"073ab0ad-d76e-4ea2-8ecf-eb3436e824bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.110787 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bf23480c-73e2-4c48-b39c-92ef17211274-registry-certificates\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.110860 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/364f3367-d5ab-4345-9c3f-bb7529d76c6f-etcd-service-ca\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.110906 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dssdc\" (UniqueName: \"kubernetes.io/projected/5493a447-e04c-4f7b-b8ed-6816543ee631-kube-api-access-dssdc\") pod \"packageserver-d55dfcdfc-mq8g4\" (UID: \"5493a447-e04c-4f7b-b8ed-6816543ee631\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.110940 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1521745e-1f44-4a25-8eff-05062c2c24ef-trusted-ca\") pod \"ingress-operator-5b745b69d9-4kjv6\" (UID: \"1521745e-1f44-4a25-8eff-05062c2c24ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.111023 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-stats-auth\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:39 crc 
kubenswrapper[4760]: I1227 05:45:39.111062 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c470872-3eb8-4afc-a357-b225ba9b6c94-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrhvz\" (UID: \"8c470872-3eb8-4afc-a357-b225ba9b6c94\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.111135 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjm6t\" (UniqueName: \"kubernetes.io/projected/33f25b04-37cd-4724-9db9-b1816dfb71bc-kube-api-access-xjm6t\") pod \"cluster-image-registry-operator-dc59b4c8b-r48mz\" (UID: \"33f25b04-37cd-4724-9db9-b1816dfb71bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.111215 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.111257 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws2z9\" (UniqueName: \"kubernetes.io/projected/3dfa4237-e979-4215-9f2c-20aa6303cae7-kube-api-access-ws2z9\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.111341 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-982gl\" (UniqueName: \"kubernetes.io/projected/fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2-kube-api-access-982gl\") pod \"cluster-samples-operator-665b6dd947-72sjb\" (UID: \"fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.111735 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b7765ee-7703-4b50-a91b-94f8cadaf3e3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-79bnd\" (UID: \"7b7765ee-7703-4b50-a91b-94f8cadaf3e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.111822 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/db38a69b-db55-4829-8258-bf3da32477ac-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zfw4n\" (UID: \"db38a69b-db55-4829-8258-bf3da32477ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zfw4n" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.112191 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvqfp\" (UniqueName: \"kubernetes.io/projected/1521745e-1f44-4a25-8eff-05062c2c24ef-kube-api-access-pvqfp\") pod \"ingress-operator-5b745b69d9-4kjv6\" (UID: \"1521745e-1f44-4a25-8eff-05062c2c24ef\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.112278 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-h4drj\" (UID: \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.112338 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/385068c5-bdbb-41fe-b1bc-1597b2a461ea-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wzqd4\" (UID: \"385068c5-bdbb-41fe-b1bc-1597b2a461ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.112402 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-trusted-ca-bundle\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.112439 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-machine-approver-tls\") pod \"machine-approver-56656f9798-4pjtd\" (UID: \"17a2cff3-78a9-45b6-a044-81e7cc36ca0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.112474 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-metrics-certs\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.112509 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.112555 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.112662 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/073ab0ad-d76e-4ea2-8ecf-eb3436e824bb-proxy-tls\") pod \"machine-config-controller-84d6567774-ctvzp\" (UID: \"073ab0ad-d76e-4ea2-8ecf-eb3436e824bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.112753 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7765ee-7703-4b50-a91b-94f8cadaf3e3-config\") pod \"kube-apiserver-operator-766d6c64bb-79bnd\" (UID: 
\"7b7765ee-7703-4b50-a91b-94f8cadaf3e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.112795 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bf23480c-73e2-4c48-b39c-92ef17211274-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.112832 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.112864 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-trusted-ca\") pod \"console-operator-58897d9998-2x9wk\" (UID: \"fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1\") " pod="openshift-console-operator/console-operator-58897d9998-2x9wk" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.112901 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shhgt\" (UniqueName: \"kubernetes.io/projected/8c470872-3eb8-4afc-a357-b225ba9b6c94-kube-api-access-shhgt\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrhvz\" (UID: \"8c470872-3eb8-4afc-a357-b225ba9b6c94\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.112934 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5493a447-e04c-4f7b-b8ed-6816543ee631-apiservice-cert\") pod \"packageserver-d55dfcdfc-mq8g4\" (UID: \"5493a447-e04c-4f7b-b8ed-6816543ee631\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.113169 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxzvw\" (UniqueName: \"kubernetes.io/projected/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-kube-api-access-lxzvw\") pod \"cni-sysctl-allowlist-ds-h4drj\" (UID: \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.113218 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297cc\" (UniqueName: \"kubernetes.io/projected/a00e2aaf-ac7c-4672-93d5-fc662e271b41-kube-api-access-297cc\") pod \"catalog-operator-68c6474976-sbswf\" (UID: \"a00e2aaf-ac7c-4672-93d5-fc662e271b41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.113277 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w57dj\" (UniqueName: \"kubernetes.io/projected/acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da-kube-api-access-w57dj\") pod \"openshift-apiserver-operator-796bbdcf4f-rs9rz\" (UID: \"acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.113327 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-service-ca\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.113368 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d886c-9f72-4358-8ddb-f820f7181639-mountpoint-dir\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.113406 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfgz6\" (UniqueName: \"kubernetes.io/projected/d0478c05-e6ed-4970-ad35-5ef3904f94c9-kube-api-access-lfgz6\") pod \"ingress-canary-cdjdr\" (UID: \"d0478c05-e6ed-4970-ad35-5ef3904f94c9\") " pod="openshift-ingress-canary/ingress-canary-cdjdr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.113418 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bf23480c-73e2-4c48-b39c-92ef17211274-registry-certificates\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.113602 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64tmq\" (UniqueName: \"kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-kube-api-access-64tmq\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.113675 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.113727 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2f024673-515b-451c-b19c-f542b4cebba9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w927h\" (UID: \"2f024673-515b-451c-b19c-f542b4cebba9\") " pod="openshift-marketplace/marketplace-operator-79b997595-w927h" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.113773 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62b0425e-20fc-41c8-99c0-cdccdc48d766-secret-volume\") pod \"collect-profiles-29446905-rnvbt\" (UID: \"62b0425e-20fc-41c8-99c0-cdccdc48d766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.113808 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqm5w\" (UniqueName: \"kubernetes.io/projected/62b0425e-20fc-41c8-99c0-cdccdc48d766-kube-api-access-kqm5w\") pod \"collect-profiles-29446905-rnvbt\" (UID: \"62b0425e-20fc-41c8-99c0-cdccdc48d766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.113866 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-service-ca-bundle\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.113918 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-serving-cert\") pod \"route-controller-manager-6576b87f9c-rpcr2\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.114078 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.114171 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f87jg\" (UniqueName: \"kubernetes.io/projected/e90733f1-2aa6-4487-9212-1f21cc77bea4-kube-api-access-f87jg\") pod \"migrator-59844c95c7-d8hds\" (UID: \"e90733f1-2aa6-4487-9212-1f21cc77bea4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8hds" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.114210 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9a5980c3-ee00-41b1-9707-f349149a53c4-images\") pod \"machine-config-operator-74547568cd-6j9tg\" (UID: \"9a5980c3-ee00-41b1-9707-f349149a53c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.114310 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmx8\" (UniqueName: \"kubernetes.io/projected/4a72f86c-c203-482f-b0e9-beb1f3f77fa0-kube-api-access-xdmx8\") pod \"service-ca-operator-777779d784-qxkdj\" (UID: \"4a72f86c-c203-482f-b0e9-beb1f3f77fa0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.114385 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3cacaf44-3b98-45b6-9a51-e41e33c4679d-certs\") pod \"machine-config-server-kkb4n\" (UID: \"3cacaf44-3b98-45b6-9a51-e41e33c4679d\") " pod="openshift-machine-config-operator/machine-config-server-kkb4n" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.132219 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"kube-root-ca.crt" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.152739 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.172354 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.192166 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.210182 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64a3b986-ea3d-4cbb-83ba-44971b220664-serving-cert\") pod \"openshift-config-operator-7777fb866f-pdd96\" (UID: \"64a3b986-ea3d-4cbb-83ba-44971b220664\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.211606 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.213618 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-config\") pod \"console-operator-58897d9998-2x9wk\" (UID: \"fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1\") " pod="openshift-console-operator/console-operator-58897d9998-2x9wk" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.224492 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:39 crc kubenswrapper[4760]: E1227 05:45:39.224780 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:39.724732899 +0000 UTC m=+62.484802274 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.224994 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh5dp\" (UniqueName: \"kubernetes.io/projected/073ab0ad-d76e-4ea2-8ecf-eb3436e824bb-kube-api-access-mh5dp\") pod \"machine-config-controller-84d6567774-ctvzp\" (UID: \"073ab0ad-d76e-4ea2-8ecf-eb3436e824bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.225081 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1e43e2-44f4-4c7f-acea-660e06d1ef90-config\") pod \"kube-controller-manager-operator-78b949d7b-dv2bd\" (UID: \"8b1e43e2-44f4-4c7f-acea-660e06d1ef90\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.225207 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dssdc\" (UniqueName: \"kubernetes.io/projected/5493a447-e04c-4f7b-b8ed-6816543ee631-kube-api-access-dssdc\") pod \"packageserver-d55dfcdfc-mq8g4\" (UID: \"5493a447-e04c-4f7b-b8ed-6816543ee631\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.225262 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1521745e-1f44-4a25-8eff-05062c2c24ef-trusted-ca\") pod \"ingress-operator-5b745b69d9-4kjv6\" (UID: \"1521745e-1f44-4a25-8eff-05062c2c24ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.225435 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjm6t\" (UniqueName: \"kubernetes.io/projected/33f25b04-37cd-4724-9db9-b1816dfb71bc-kube-api-access-xjm6t\") pod \"cluster-image-registry-operator-dc59b4c8b-r48mz\" (UID: \"33f25b04-37cd-4724-9db9-b1816dfb71bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.225559 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b7765ee-7703-4b50-a91b-94f8cadaf3e3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-79bnd\" (UID: \"7b7765ee-7703-4b50-a91b-94f8cadaf3e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.225618 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvqfp\" (UniqueName: \"kubernetes.io/projected/1521745e-1f44-4a25-8eff-05062c2c24ef-kube-api-access-pvqfp\") pod \"ingress-operator-5b745b69d9-4kjv6\" (UID: \"1521745e-1f44-4a25-8eff-05062c2c24ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" Dec 27 05:45:39 crc 
kubenswrapper[4760]: I1227 05:45:39.225672 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-h4drj\" (UID: \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.225727 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/385068c5-bdbb-41fe-b1bc-1597b2a461ea-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wzqd4\" (UID: \"385068c5-bdbb-41fe-b1bc-1597b2a461ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.225782 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/db38a69b-db55-4829-8258-bf3da32477ac-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zfw4n\" (UID: \"db38a69b-db55-4829-8258-bf3da32477ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zfw4n" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.225906 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/073ab0ad-d76e-4ea2-8ecf-eb3436e824bb-proxy-tls\") pod \"machine-config-controller-84d6567774-ctvzp\" (UID: \"073ab0ad-d76e-4ea2-8ecf-eb3436e824bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.225959 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7765ee-7703-4b50-a91b-94f8cadaf3e3-config\") pod \"kube-apiserver-operator-766d6c64bb-79bnd\" (UID: \"7b7765ee-7703-4b50-a91b-94f8cadaf3e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.226125 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5493a447-e04c-4f7b-b8ed-6816543ee631-apiservice-cert\") pod \"packageserver-d55dfcdfc-mq8g4\" (UID: \"5493a447-e04c-4f7b-b8ed-6816543ee631\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.226183 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxzvw\" (UniqueName: \"kubernetes.io/projected/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-kube-api-access-lxzvw\") pod \"cni-sysctl-allowlist-ds-h4drj\" (UID: \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.226854 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1e43e2-44f4-4c7f-acea-660e06d1ef90-config\") pod \"kube-controller-manager-operator-78b949d7b-dv2bd\" (UID: \"8b1e43e2-44f4-4c7f-acea-660e06d1ef90\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.226865 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-297cc\" (UniqueName: \"kubernetes.io/projected/a00e2aaf-ac7c-4672-93d5-fc662e271b41-kube-api-access-297cc\") pod \"catalog-operator-68c6474976-sbswf\" (UID: \"a00e2aaf-ac7c-4672-93d5-fc662e271b41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227022 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d886c-9f72-4358-8ddb-f820f7181639-mountpoint-dir\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227078 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfgz6\" (UniqueName: \"kubernetes.io/projected/d0478c05-e6ed-4970-ad35-5ef3904f94c9-kube-api-access-lfgz6\") pod \"ingress-canary-cdjdr\" (UID: \"d0478c05-e6ed-4970-ad35-5ef3904f94c9\") " pod="openshift-ingress-canary/ingress-canary-cdjdr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227187 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2f024673-515b-451c-b19c-f542b4cebba9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w927h\" (UID: \"2f024673-515b-451c-b19c-f542b4cebba9\") " pod="openshift-marketplace/marketplace-operator-79b997595-w927h" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227229 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqm5w\" (UniqueName: \"kubernetes.io/projected/62b0425e-20fc-41c8-99c0-cdccdc48d766-kube-api-access-kqm5w\") pod \"collect-profiles-29446905-rnvbt\" (UID: \"62b0425e-20fc-41c8-99c0-cdccdc48d766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227282 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d886c-9f72-4358-8ddb-f820f7181639-mountpoint-dir\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227297 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f87jg\" (UniqueName: \"kubernetes.io/projected/e90733f1-2aa6-4487-9212-1f21cc77bea4-kube-api-access-f87jg\") pod \"migrator-59844c95c7-d8hds\" (UID: \"e90733f1-2aa6-4487-9212-1f21cc77bea4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8hds" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227364 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1521745e-1f44-4a25-8eff-05062c2c24ef-trusted-ca\") pod \"ingress-operator-5b745b69d9-4kjv6\" (UID: \"1521745e-1f44-4a25-8eff-05062c2c24ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227372 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62b0425e-20fc-41c8-99c0-cdccdc48d766-secret-volume\") pod \"collect-profiles-29446905-rnvbt\" (UID: \"62b0425e-20fc-41c8-99c0-cdccdc48d766\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227448 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9a5980c3-ee00-41b1-9707-f349149a53c4-images\") pod \"machine-config-operator-74547568cd-6j9tg\" (UID: \"9a5980c3-ee00-41b1-9707-f349149a53c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227189 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-h4drj\" (UID: \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227489 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmx8\" (UniqueName: \"kubernetes.io/projected/4a72f86c-c203-482f-b0e9-beb1f3f77fa0-kube-api-access-xdmx8\") pod \"service-ca-operator-777779d784-qxkdj\" (UID: \"4a72f86c-c203-482f-b0e9-beb1f3f77fa0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227584 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3cacaf44-3b98-45b6-9a51-e41e33c4679d-certs\") pod \"machine-config-server-kkb4n\" (UID: \"3cacaf44-3b98-45b6-9a51-e41e33c4679d\") " pod="openshift-machine-config-operator/machine-config-server-kkb4n" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227655 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b1e43e2-44f4-4c7f-acea-660e06d1ef90-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dv2bd\" (UID: \"8b1e43e2-44f4-4c7f-acea-660e06d1ef90\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227708 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33f25b04-37cd-4724-9db9-b1816dfb71bc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r48mz\" (UID: \"33f25b04-37cd-4724-9db9-b1816dfb71bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227766 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rct26\" (UniqueName: \"kubernetes.io/projected/67b40739-4e24-41b5-9d6a-7ab19939c81c-kube-api-access-rct26\") pod \"package-server-manager-789f6589d5-f9q5z\" (UID: \"67b40739-4e24-41b5-9d6a-7ab19939c81c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227805 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/411e4a40-9030-4d01-aeb4-c5dd6d25b9b2-signing-key\") pod \"service-ca-9c57cc56f-6cgcs\" (UID: \"411e4a40-9030-4d01-aeb4-c5dd6d25b9b2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6cgcs" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227839 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gffgl\" (UniqueName: \"kubernetes.io/projected/e75b3601-c0ac-4de5-9847-52a8c087a6f9-kube-api-access-gffgl\") pod \"multus-admission-controller-857f4d67dd-phfqm\" (UID: \"e75b3601-c0ac-4de5-9847-52a8c087a6f9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-phfqm" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227886 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/55aab342-3de1-46cf-8d85-d71345fd1538-metrics-tls\") pod \"dns-default-p9vf5\" (UID: \"55aab342-3de1-46cf-8d85-d71345fd1538\") " pod="openshift-dns/dns-default-p9vf5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.227919 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d886c-9f72-4358-8ddb-f820f7181639-plugins-dir\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228006 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d886c-9f72-4358-8ddb-f820f7181639-registration-dir\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228037 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55aab342-3de1-46cf-8d85-d71345fd1538-config-volume\") pod \"dns-default-p9vf5\" (UID: \"55aab342-3de1-46cf-8d85-d71345fd1538\") " pod="openshift-dns/dns-default-p9vf5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228146 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/33f25b04-37cd-4724-9db9-b1816dfb71bc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r48mz\" (UID: \"33f25b04-37cd-4724-9db9-b1816dfb71bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228182 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qdm4\" (UniqueName: \"kubernetes.io/projected/9e844584-fa56-4ff9-b454-bcb89ae547db-kube-api-access-5qdm4\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgkr\" (UID: \"9e844584-fa56-4ff9-b454-bcb89ae547db\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228228 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9a5980c3-ee00-41b1-9707-f349149a53c4-proxy-tls\") pod \"machine-config-operator-74547568cd-6j9tg\" (UID: \"9a5980c3-ee00-41b1-9707-f349149a53c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228292 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjrl\" (UniqueName: \"kubernetes.io/projected/411e4a40-9030-4d01-aeb4-c5dd6d25b9b2-kube-api-access-7vjrl\") pod 
\"service-ca-9c57cc56f-6cgcs\" (UID: \"411e4a40-9030-4d01-aeb4-c5dd6d25b9b2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6cgcs" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228326 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/385068c5-bdbb-41fe-b1bc-1597b2a461ea-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wzqd4\" (UID: \"385068c5-bdbb-41fe-b1bc-1597b2a461ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228363 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d80f15d-ce8a-4c94-834d-9ec4829bc34f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q7d2f\" (UID: \"5d80f15d-ce8a-4c94-834d-9ec4829bc34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228443 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bc145080-11cc-455d-b8d4-7baab6859228-srv-cert\") pod \"olm-operator-6b444d44fb-tlp92\" (UID: \"bc145080-11cc-455d-b8d4-7baab6859228\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228502 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5493a447-e04c-4f7b-b8ed-6816543ee631-tmpfs\") pod \"packageserver-d55dfcdfc-mq8g4\" (UID: \"5493a447-e04c-4f7b-b8ed-6816543ee631\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228546 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a00e2aaf-ac7c-4672-93d5-fc662e271b41-srv-cert\") pod \"catalog-operator-68c6474976-sbswf\" (UID: \"a00e2aaf-ac7c-4672-93d5-fc662e271b41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228570 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9a5980c3-ee00-41b1-9707-f349149a53c4-images\") pod \"machine-config-operator-74547568cd-6j9tg\" (UID: \"9a5980c3-ee00-41b1-9707-f349149a53c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228587 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1521745e-1f44-4a25-8eff-05062c2c24ef-metrics-tls\") pod \"ingress-operator-5b745b69d9-4kjv6\" (UID: \"1521745e-1f44-4a25-8eff-05062c2c24ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228639 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/073ab0ad-d76e-4ea2-8ecf-eb3436e824bb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ctvzp\" (UID: \"073ab0ad-d76e-4ea2-8ecf-eb3436e824bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 
05:45:39.228696 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d80f15d-ce8a-4c94-834d-9ec4829bc34f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q7d2f\" (UID: \"5d80f15d-ce8a-4c94-834d-9ec4829bc34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228747 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a72f86c-c203-482f-b0e9-beb1f3f77fa0-serving-cert\") pod \"service-ca-operator-777779d784-qxkdj\" (UID: \"4a72f86c-c203-482f-b0e9-beb1f3f77fa0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228826 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77jp9\" (UniqueName: \"kubernetes.io/projected/9a5980c3-ee00-41b1-9707-f349149a53c4-kube-api-access-77jp9\") pod \"machine-config-operator-74547568cd-6j9tg\" (UID: \"9a5980c3-ee00-41b1-9707-f349149a53c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228901 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228927 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d886c-9f72-4358-8ddb-f820f7181639-plugins-dir\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.228937 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xssq2\" (UniqueName: \"kubernetes.io/projected/3cacaf44-3b98-45b6-9a51-e41e33c4679d-kube-api-access-xssq2\") pod \"machine-config-server-kkb4n\" (UID: \"3cacaf44-3b98-45b6-9a51-e41e33c4679d\") " pod="openshift-machine-config-operator/machine-config-server-kkb4n" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.229034 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7cdk\" (UniqueName: \"kubernetes.io/projected/bc145080-11cc-455d-b8d4-7baab6859228-kube-api-access-t7cdk\") pod \"olm-operator-6b444d44fb-tlp92\" (UID: \"bc145080-11cc-455d-b8d4-7baab6859228\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.229123 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27npl\" (UniqueName: \"kubernetes.io/projected/55aab342-3de1-46cf-8d85-d71345fd1538-kube-api-access-27npl\") pod \"dns-default-p9vf5\" (UID: \"55aab342-3de1-46cf-8d85-d71345fd1538\") " pod="openshift-dns/dns-default-p9vf5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.229214 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdmmd\" (UniqueName: 
\"kubernetes.io/projected/911b180f-4536-4181-956b-abd6e2c8e0d0-kube-api-access-cdmmd\") pod \"dns-operator-744455d44c-tqbx6\" (UID: \"911b180f-4536-4181-956b-abd6e2c8e0d0\") " pod="openshift-dns-operator/dns-operator-744455d44c-tqbx6" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.229248 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/411e4a40-9030-4d01-aeb4-c5dd6d25b9b2-signing-cabundle\") pod \"service-ca-9c57cc56f-6cgcs\" (UID: \"411e4a40-9030-4d01-aeb4-c5dd6d25b9b2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6cgcs" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.229284 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e844584-fa56-4ff9-b454-bcb89ae547db-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgkr\" (UID: \"9e844584-fa56-4ff9-b454-bcb89ae547db\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.229318 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/385068c5-bdbb-41fe-b1bc-1597b2a461ea-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wzqd4\" (UID: \"385068c5-bdbb-41fe-b1bc-1597b2a461ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.229351 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d886c-9f72-4358-8ddb-f820f7181639-csi-data-dir\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.229532 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a00e2aaf-ac7c-4672-93d5-fc662e271b41-profile-collector-cert\") pod \"catalog-operator-68c6474976-sbswf\" (UID: \"a00e2aaf-ac7c-4672-93d5-fc662e271b41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.229581 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3cacaf44-3b98-45b6-9a51-e41e33c4679d-node-bootstrap-token\") pod \"machine-config-server-kkb4n\" (UID: \"3cacaf44-3b98-45b6-9a51-e41e33c4679d\") " pod="openshift-machine-config-operator/machine-config-server-kkb4n" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.229654 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/67b40739-4e24-41b5-9d6a-7ab19939c81c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f9q5z\" (UID: \"67b40739-4e24-41b5-9d6a-7ab19939c81c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.229703 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/385068c5-bdbb-41fe-b1bc-1597b2a461ea-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-wzqd4\" (UID: \"385068c5-bdbb-41fe-b1bc-1597b2a461ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.229784 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp55s\" (UniqueName: \"kubernetes.io/projected/2f024673-515b-451c-b19c-f542b4cebba9-kube-api-access-hp55s\") pod \"marketplace-operator-79b997595-w927h\" (UID: \"2f024673-515b-451c-b19c-f542b4cebba9\") " pod="openshift-marketplace/marketplace-operator-79b997595-w927h" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.229832 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-h4drj\" (UID: \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.229870 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58rsv\" (UniqueName: \"kubernetes.io/projected/7f4d886c-9f72-4358-8ddb-f820f7181639-kube-api-access-58rsv\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.229905 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7765ee-7703-4b50-a91b-94f8cadaf3e3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-79bnd\" (UID: \"7b7765ee-7703-4b50-a91b-94f8cadaf3e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.229940 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/385068c5-bdbb-41fe-b1bc-1597b2a461ea-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wzqd4\" (UID: \"385068c5-bdbb-41fe-b1bc-1597b2a461ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230003 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62b0425e-20fc-41c8-99c0-cdccdc48d766-config-volume\") pod \"collect-profiles-29446905-rnvbt\" (UID: \"62b0425e-20fc-41c8-99c0-cdccdc48d766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230043 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a72f86c-c203-482f-b0e9-beb1f3f77fa0-config\") pod \"service-ca-operator-777779d784-qxkdj\" (UID: \"4a72f86c-c203-482f-b0e9-beb1f3f77fa0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230075 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b1e43e2-44f4-4c7f-acea-660e06d1ef90-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dv2bd\" (UID: \"8b1e43e2-44f4-4c7f-acea-660e06d1ef90\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230149 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5493a447-e04c-4f7b-b8ed-6816543ee631-webhook-cert\") pod \"packageserver-d55dfcdfc-mq8g4\" (UID: \"5493a447-e04c-4f7b-b8ed-6816543ee631\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230189 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e844584-fa56-4ff9-b454-bcb89ae547db-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgkr\" (UID: \"9e844584-fa56-4ff9-b454-bcb89ae547db\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230224 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-ready\") pod \"cni-sysctl-allowlist-ds-h4drj\" (UID: \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230284 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bc145080-11cc-455d-b8d4-7baab6859228-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tlp92\" (UID: \"bc145080-11cc-455d-b8d4-7baab6859228\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230336 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e75b3601-c0ac-4de5-9847-52a8c087a6f9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-phfqm\" (UID: \"e75b3601-c0ac-4de5-9847-52a8c087a6f9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-phfqm" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230402 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d886c-9f72-4358-8ddb-f820f7181639-socket-dir\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230414 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/073ab0ad-d76e-4ea2-8ecf-eb3436e824bb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ctvzp\" (UID: \"073ab0ad-d76e-4ea2-8ecf-eb3436e824bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230497 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33f25b04-37cd-4724-9db9-b1816dfb71bc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r48mz\" (UID: \"33f25b04-37cd-4724-9db9-b1816dfb71bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230541 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f024673-515b-451c-b19c-f542b4cebba9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w927h\" (UID: \"2f024673-515b-451c-b19c-f542b4cebba9\") " pod="openshift-marketplace/marketplace-operator-79b997595-w927h" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230589 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0478c05-e6ed-4970-ad35-5ef3904f94c9-cert\") pod \"ingress-canary-cdjdr\" (UID: \"d0478c05-e6ed-4970-ad35-5ef3904f94c9\") " pod="openshift-ingress-canary/ingress-canary-cdjdr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230639 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/911b180f-4536-4181-956b-abd6e2c8e0d0-metrics-tls\") pod \"dns-operator-744455d44c-tqbx6\" (UID: \"911b180f-4536-4181-956b-abd6e2c8e0d0\") " pod="openshift-dns-operator/dns-operator-744455d44c-tqbx6" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230715 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9a5980c3-ee00-41b1-9707-f349149a53c4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6j9tg\" (UID: \"9a5980c3-ee00-41b1-9707-f349149a53c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230768 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vdcp\" (UniqueName: \"kubernetes.io/projected/db38a69b-db55-4829-8258-bf3da32477ac-kube-api-access-9vdcp\") pod \"control-plane-machine-set-operator-78cbb6b69f-zfw4n\" (UID: \"db38a69b-db55-4829-8258-bf3da32477ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zfw4n" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230840 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1521745e-1f44-4a25-8eff-05062c2c24ef-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4kjv6\" (UID: \"1521745e-1f44-4a25-8eff-05062c2c24ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.230892 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d80f15d-ce8a-4c94-834d-9ec4829bc34f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q7d2f\" (UID: \"5d80f15d-ce8a-4c94-834d-9ec4829bc34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.231204 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d886c-9f72-4358-8ddb-f820f7181639-registration-dir\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.232228 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55aab342-3de1-46cf-8d85-d71345fd1538-config-volume\") pod \"dns-default-p9vf5\" (UID: 
\"55aab342-3de1-46cf-8d85-d71345fd1538\") " pod="openshift-dns/dns-default-p9vf5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.233621 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2f024673-515b-451c-b19c-f542b4cebba9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w927h\" (UID: \"2f024673-515b-451c-b19c-f542b4cebba9\") " pod="openshift-marketplace/marketplace-operator-79b997595-w927h" Dec 27 05:45:39 crc kubenswrapper[4760]: E1227 05:45:39.233731 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:39.733700223 +0000 UTC m=+62.493769578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.237165 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a72f86c-c203-482f-b0e9-beb1f3f77fa0-serving-cert\") pod \"service-ca-operator-777779d784-qxkdj\" (UID: \"4a72f86c-c203-482f-b0e9-beb1f3f77fa0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.237388 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/073ab0ad-d76e-4ea2-8ecf-eb3436e824bb-proxy-tls\") pod \"machine-config-controller-84d6567774-ctvzp\" (UID: \"073ab0ad-d76e-4ea2-8ecf-eb3436e824bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.237794 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-h4drj\" (UID: \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.238071 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/385068c5-bdbb-41fe-b1bc-1597b2a461ea-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wzqd4\" (UID: \"385068c5-bdbb-41fe-b1bc-1597b2a461ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.238212 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/55aab342-3de1-46cf-8d85-d71345fd1538-metrics-tls\") pod \"dns-default-p9vf5\" (UID: \"55aab342-3de1-46cf-8d85-d71345fd1538\") " pod="openshift-dns/dns-default-p9vf5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.239280 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/3cacaf44-3b98-45b6-9a51-e41e33c4679d-certs\") pod \"machine-config-server-kkb4n\" (UID: \"3cacaf44-3b98-45b6-9a51-e41e33c4679d\") " pod="openshift-machine-config-operator/machine-config-server-kkb4n" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.239444 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5493a447-e04c-4f7b-b8ed-6816543ee631-tmpfs\") pod \"packageserver-d55dfcdfc-mq8g4\" (UID: \"5493a447-e04c-4f7b-b8ed-6816543ee631\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.239969 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d80f15d-ce8a-4c94-834d-9ec4829bc34f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q7d2f\" (UID: \"5d80f15d-ce8a-4c94-834d-9ec4829bc34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.240539 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d886c-9f72-4358-8ddb-f820f7181639-socket-dir\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.240587 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d886c-9f72-4358-8ddb-f820f7181639-csi-data-dir\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.240676 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/33f25b04-37cd-4724-9db9-b1816dfb71bc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r48mz\" (UID: \"33f25b04-37cd-4724-9db9-b1816dfb71bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.240904 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a72f86c-c203-482f-b0e9-beb1f3f77fa0-config\") pod \"service-ca-operator-777779d784-qxkdj\" (UID: \"4a72f86c-c203-482f-b0e9-beb1f3f77fa0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.241417 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62b0425e-20fc-41c8-99c0-cdccdc48d766-secret-volume\") pod \"collect-profiles-29446905-rnvbt\" (UID: \"62b0425e-20fc-41c8-99c0-cdccdc48d766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.241479 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-ready\") pod \"cni-sysctl-allowlist-ds-h4drj\" (UID: \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.241970 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9a5980c3-ee00-41b1-9707-f349149a53c4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6j9tg\" (UID: \"9a5980c3-ee00-41b1-9707-f349149a53c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.242155 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/385068c5-bdbb-41fe-b1bc-1597b2a461ea-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wzqd4\" (UID: \"385068c5-bdbb-41fe-b1bc-1597b2a461ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.242826 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62b0425e-20fc-41c8-99c0-cdccdc48d766-config-volume\") pod \"collect-profiles-29446905-rnvbt\" (UID: \"62b0425e-20fc-41c8-99c0-cdccdc48d766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.243074 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/385068c5-bdbb-41fe-b1bc-1597b2a461ea-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wzqd4\" (UID: \"385068c5-bdbb-41fe-b1bc-1597b2a461ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.243867 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/411e4a40-9030-4d01-aeb4-c5dd6d25b9b2-signing-key\") pod \"service-ca-9c57cc56f-6cgcs\" (UID: \"411e4a40-9030-4d01-aeb4-c5dd6d25b9b2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6cgcs" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.244268 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/411e4a40-9030-4d01-aeb4-c5dd6d25b9b2-signing-cabundle\") pod \"service-ca-9c57cc56f-6cgcs\" (UID: \"411e4a40-9030-4d01-aeb4-c5dd6d25b9b2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6cgcs" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.246405 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/db38a69b-db55-4829-8258-bf3da32477ac-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zfw4n\" (UID: \"db38a69b-db55-4829-8258-bf3da32477ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zfw4n" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.246830 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f024673-515b-451c-b19c-f542b4cebba9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w927h\" (UID: \"2f024673-515b-451c-b19c-f542b4cebba9\") " pod="openshift-marketplace/marketplace-operator-79b997595-w927h" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.247330 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/385068c5-bdbb-41fe-b1bc-1597b2a461ea-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-wzqd4\" (UID: \"385068c5-bdbb-41fe-b1bc-1597b2a461ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.247763 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d80f15d-ce8a-4c94-834d-9ec4829bc34f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q7d2f\" (UID: \"5d80f15d-ce8a-4c94-834d-9ec4829bc34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.252369 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9a5980c3-ee00-41b1-9707-f349149a53c4-proxy-tls\") pod \"machine-config-operator-74547568cd-6j9tg\" (UID: \"9a5980c3-ee00-41b1-9707-f349149a53c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.252820 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b1e43e2-44f4-4c7f-acea-660e06d1ef90-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dv2bd\" (UID: \"8b1e43e2-44f4-4c7f-acea-660e06d1ef90\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.253843 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5493a447-e04c-4f7b-b8ed-6816543ee631-apiservice-cert\") pod \"packageserver-d55dfcdfc-mq8g4\" (UID: \"5493a447-e04c-4f7b-b8ed-6816543ee631\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.254029 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bc145080-11cc-455d-b8d4-7baab6859228-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tlp92\" (UID: \"bc145080-11cc-455d-b8d4-7baab6859228\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.254916 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a00e2aaf-ac7c-4672-93d5-fc662e271b41-profile-collector-cert\") pod \"catalog-operator-68c6474976-sbswf\" (UID: \"a00e2aaf-ac7c-4672-93d5-fc662e271b41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.255126 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/67b40739-4e24-41b5-9d6a-7ab19939c81c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f9q5z\" (UID: \"67b40739-4e24-41b5-9d6a-7ab19939c81c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.255174 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5493a447-e04c-4f7b-b8ed-6816543ee631-webhook-cert\") pod \"packageserver-d55dfcdfc-mq8g4\" (UID: \"5493a447-e04c-4f7b-b8ed-6816543ee631\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.255648 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e75b3601-c0ac-4de5-9847-52a8c087a6f9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-phfqm\" (UID: \"e75b3601-c0ac-4de5-9847-52a8c087a6f9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-phfqm" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.256718 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1521745e-1f44-4a25-8eff-05062c2c24ef-metrics-tls\") pod \"ingress-operator-5b745b69d9-4kjv6\" (UID: \"1521745e-1f44-4a25-8eff-05062c2c24ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.258275 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0478c05-e6ed-4970-ad35-5ef3904f94c9-cert\") pod \"ingress-canary-cdjdr\" (UID: \"d0478c05-e6ed-4970-ad35-5ef3904f94c9\") " pod="openshift-ingress-canary/ingress-canary-cdjdr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.259751 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bc145080-11cc-455d-b8d4-7baab6859228-srv-cert\") pod \"olm-operator-6b444d44fb-tlp92\" (UID: \"bc145080-11cc-455d-b8d4-7baab6859228\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.260382 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a00e2aaf-ac7c-4672-93d5-fc662e271b41-srv-cert\") pod \"catalog-operator-68c6474976-sbswf\" (UID: \"a00e2aaf-ac7c-4672-93d5-fc662e271b41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.260703 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3cacaf44-3b98-45b6-9a51-e41e33c4679d-node-bootstrap-token\") pod \"machine-config-server-kkb4n\" (UID: \"3cacaf44-3b98-45b6-9a51-e41e33c4679d\") " pod="openshift-machine-config-operator/machine-config-server-kkb4n" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.285675 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.292716 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.293523 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/364f3367-d5ab-4345-9c3f-bb7529d76c6f-config\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.297530 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-etcd-client\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.312342 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.313835 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-audit-policies\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.331993 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:39 crc kubenswrapper[4760]: E1227 05:45:39.332209 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:39.832182373 +0000 UTC m=+62.592251688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.332348 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: E1227 05:45:39.333462 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:39.833447233 +0000 UTC m=+62.593516568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.337980 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.344641 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.353225 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.368704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-oauth-config\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.371804 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.373964 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-auth-proxy-config\") pod \"machine-approver-56656f9798-4pjtd\" (UID: \"17a2cff3-78a9-45b6-a044-81e7cc36ca0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.393781 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.404313 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-config\") pod \"machine-approver-56656f9798-4pjtd\" (UID: \"17a2cff3-78a9-45b6-a044-81e7cc36ca0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.412418 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.414798 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-client-ca\") pod \"route-controller-manager-6576b87f9c-rpcr2\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.431945 4760 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.433605 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:39 crc kubenswrapper[4760]: E1227 05:45:39.434729 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:39.93471383 +0000 UTC m=+62.694783155 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.434774 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-config\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.471136 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.477890 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-registry-tls\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.513516 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.516610 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-config\") pod \"route-controller-manager-6576b87f9c-rpcr2\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.535801 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.536851 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc 
kubenswrapper[4760]: E1227 05:45:39.537761 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.037727998 +0000 UTC m=+62.797797343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.545895 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.580173 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-bound-sa-token\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.591609 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.602567 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.612502 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.620585 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rs9rz\" (UID: \"acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.638898 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:39 crc kubenswrapper[4760]: E1227 05:45:39.639067 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-27 05:45:40.139038736 +0000 UTC m=+62.899108051 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.639208 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: E1227 05:45:39.639859 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.139787104 +0000 UTC m=+62.899856489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.672160 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.679178 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-oauth-serving-cert\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.692348 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.703063 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/364f3367-d5ab-4345-9c3f-bb7529d76c6f-serving-cert\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.712176 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.719010 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/364f3367-d5ab-4345-9c3f-bb7529d76c6f-etcd-ca\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:39 crc 
kubenswrapper[4760]: I1227 05:45:39.732390 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.742269 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.743400 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/364f3367-d5ab-4345-9c3f-bb7529d76c6f-etcd-client\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:39 crc kubenswrapper[4760]: E1227 05:45:39.743663 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.243640312 +0000 UTC m=+63.003709667 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.743897 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: E1227 05:45:39.744370 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.244348898 +0000 UTC m=+63.004418243 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.762256 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.771701 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf23480c-73e2-4c48-b39c-92ef17211274-trusted-ca\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.772834 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33f25b04-37cd-4724-9db9-b1816dfb71bc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r48mz\" (UID: \"33f25b04-37cd-4724-9db9-b1816dfb71bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.773015 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.783922 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.812112 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.820010 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.831947 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.842408 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-serving-cert\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.844777 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:39 crc kubenswrapper[4760]: E1227 05:45:39.844939 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.344905888 +0000 UTC m=+63.104975263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.845399 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: E1227 05:45:39.845744 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.345732248 +0000 UTC m=+63.105801583 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.852952 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.864651 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-72sjb\" (UID: \"fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.871947 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.884404 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rs9rz\" (UID: \"acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.892295 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.902212 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-serving-cert\") pod \"console-operator-58897d9998-2x9wk\" (UID: \"fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1\") " pod="openshift-console-operator/console-operator-58897d9998-2x9wk" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.911887 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.922758 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.932644 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.944246 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-default-certificate\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:39 crc 
kubenswrapper[4760]: I1227 05:45:39.947749 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:39 crc kubenswrapper[4760]: E1227 05:45:39.948390 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.448357497 +0000 UTC m=+63.208426852 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.948885 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:39 crc kubenswrapper[4760]: E1227 05:45:39.949638 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.449623717 +0000 UTC m=+63.209693062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.986522 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 27 05:45:39 crc kubenswrapper[4760]: I1227 05:45:39.995590 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.015340 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.020636 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-audit-policies\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.031214 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.045727 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-serving-cert\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.049817 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.050209 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.550171087 +0000 UTC m=+63.310240432 (durationBeforeRetry 500ms). 
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.051167 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.064915 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.072020 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.086696 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-encryption-config\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.091827 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.102686 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c470872-3eb8-4afc-a357-b225ba9b6c94-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrhvz\" (UID: \"8c470872-3eb8-4afc-a357-b225ba9b6c94\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.111079 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.112074 4760 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.112141 4760 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-session: failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.112214 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/364f3367-d5ab-4345-9c3f-bb7529d76c6f-etcd-service-ca podName:364f3367-d5ab-4345-9c3f-bb7529d76c6f nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.61218568 +0000 UTC m=+63.372255065 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/364f3367-d5ab-4345-9c3f-bb7529d76c6f-etcd-service-ca") pod "etcd-operator-b45778765-5p2sr" (UID: "364f3367-d5ab-4345-9c3f-bb7529d76c6f") : failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.112244 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-session podName:933d294b-c115-4bd3-ade2-1ae37665ae1b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.612231871 +0000 UTC m=+63.372301216 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-session") pod "oauth-openshift-558db77b4-x4v5w" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b") : failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.113865 4760 configmap.go:193] Couldn't get configMap openshift-console/service-ca: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.113934 4760 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.113969 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-service-ca podName:3dfa4237-e979-4215-9f2c-20aa6303cae7 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.613938941 +0000 UTC m=+63.374008316 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-service-ca") pod "console-f9d7485db-cmmr9" (UID: "3dfa4237-e979-4215-9f2c-20aa6303cae7") : failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.113995 4760 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114002 4760 secret.go:188] Couldn't get secret openshift-image-registry/installation-pull-secrets: failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114023 4760 secret.go:188] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114069 4760 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114305 4760 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114348 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-router-certs podName:933d294b-c115-4bd3-ade2-1ae37665ae1b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.613985573 +0000 UTC m=+63.374055018 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-router-certs") pod "oauth-openshift-558db77b4-x4v5w" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b") : failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114026 4760 secret.go:188] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114054 4760 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114071 4760 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114518 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-trusted-ca-bundle podName:3bb3c435-9a35-45f9-af35-cb29f2e6ccb1 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.614370652 +0000 UTC m=+63.374440057 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-trusted-ca-bundle") pod "apiserver-7bbb656c7d-b6725" (UID: "3bb3c435-9a35-45f9-af35-cb29f2e6ccb1") : failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114107 4760 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114550 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf23480c-73e2-4c48-b39c-92ef17211274-installation-pull-secrets podName:bf23480c-73e2-4c48-b39c-92ef17211274 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.614537046 +0000 UTC m=+63.374606501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/bf23480c-73e2-4c48-b39c-92ef17211274-installation-pull-secrets") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114577 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-metrics-certs podName:6f6b13a4-0ce7-4190-a0a3-2741bd546a1d nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.614565017 +0000 UTC m=+63.374634492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-metrics-certs") pod "router-default-5444994796-mvf5k" (UID: "6f6b13a4-0ce7-4190-a0a3-2741bd546a1d") : failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114085 4760 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114601 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-trusted-ca podName:fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.614590388 +0000 UTC m=+63.374659823 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-trusted-ca") pod "console-operator-58897d9998-2x9wk" (UID: "fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1") : failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114119 4760 secret.go:188] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114682 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-etcd-serving-ca podName:3bb3c435-9a35-45f9-af35-cb29f2e6ccb1 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.614616978 +0000 UTC m=+63.374686433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-etcd-serving-ca") pod "apiserver-7bbb656c7d-b6725" (UID: "3bb3c435-9a35-45f9-af35-cb29f2e6ccb1") : failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114723 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-machine-approver-tls podName:17a2cff3-78a9-45b6-a044-81e7cc36ca0e nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.61471062 +0000 UTC m=+63.374780085 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-machine-approver-tls") pod "machine-approver-56656f9798-4pjtd" (UID: "17a2cff3-78a9-45b6-a044-81e7cc36ca0e") : failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114750 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-trusted-ca-bundle podName:3dfa4237-e979-4215-9f2c-20aa6303cae7 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.614737761 +0000 UTC m=+63.374807226 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-trusted-ca-bundle") pod "console-f9d7485db-cmmr9" (UID: "3dfa4237-e979-4215-9f2c-20aa6303cae7") : failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114778 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-serving-cert podName:e0c1456f-b18f-4c71-a1f8-319ec8b012a1 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.614767042 +0000 UTC m=+63.374836497 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-serving-cert") pod "route-controller-manager-6576b87f9c-rpcr2" (UID: "e0c1456f-b18f-4c71-a1f8-319ec8b012a1") : failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114807 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-service-ca-bundle podName:6f6b13a4-0ce7-4190-a0a3-2741bd546a1d nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.614794932 +0000 UTC m=+63.374864387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-service-ca-bundle") pod "router-default-5444994796-mvf5k" (UID: "6f6b13a4-0ce7-4190-a0a3-2741bd546a1d") : failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114862 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-login podName:933d294b-c115-4bd3-ade2-1ae37665ae1b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.614845553 +0000 UTC m=+63.374915008 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-login") pod "oauth-openshift-558db77b4-x4v5w" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b") : failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.114918 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-stats-auth podName:6f6b13a4-0ce7-4190-a0a3-2741bd546a1d nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.614874314 +0000 UTC m=+63.374943759 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-stats-auth") pod "router-default-5444994796-mvf5k" (UID: "6f6b13a4-0ce7-4190-a0a3-2741bd546a1d") : failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.115896 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c470872-3eb8-4afc-a357-b225ba9b6c94-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrhvz\" (UID: \"8c470872-3eb8-4afc-a357-b225ba9b6c94\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz"
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.115955 4760 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.115978 4760 projected.go:194] Error preparing data for projected volume kube-api-access-hd89p for pod openshift-machine-api/machine-api-operator-5694c8668f-9wjz8: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.116037 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/32612889-890b-4efb-a777-8ad13a778841-kube-api-access-hd89p podName:32612889-890b-4efb-a777-8ad13a778841 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.616019321 +0000 UTC m=+63.376088666 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hd89p" (UniqueName: "kubernetes.io/projected/32612889-890b-4efb-a777-8ad13a778841-kube-api-access-hd89p") pod "machine-api-operator-5694c8668f-9wjz8" (UID: "32612889-890b-4efb-a777-8ad13a778841") : failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.129841 4760 request.go:700] Waited for 1.0182689s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.139648 4760 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.139688 4760 projected.go:194] Error preparing data for projected volume kube-api-access-qx6k5 for pod openshift-apiserver/apiserver-76f77b778f-wsxtc: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.139790 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf3096b8-b961-454b-9647-ac2b9d3868ca-kube-api-access-qx6k5 podName:cf3096b8-b961-454b-9647-ac2b9d3868ca nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.639746355 +0000 UTC m=+63.399815780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qx6k5" (UniqueName: "kubernetes.io/projected/cf3096b8-b961-454b-9647-ac2b9d3868ca-kube-api-access-qx6k5") pod "apiserver-76f77b778f-wsxtc" (UID: "cf3096b8-b961-454b-9647-ac2b9d3868ca") : failed to sync configmap cache: timed out waiting for the condition
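Every entry above that ends in "failed to sync configmap cache" or "failed to sync secret cache" is the same startup race rather than many distinct faults: the volume manager requested an object before the kubelet's informer cache for that namespace had synced, timed out, and re-queued the mount with the usual 500ms backoff. The projected.go errors are the same race surfacing through projected service-account volumes, which bundle the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps with the token. The request.go:700 line hints at why syncing is slow: the kubelet is client-side throttled while it bursts requests at the API server during boot. Each "Caches populated for ..." line marks a cache catching up, after which the matching SetUp succeeds on retry. Only if one of these errors outlives its cache sync is it worth checking that the object actually exists, e.g. with names taken from the entries above:

    oc -n openshift-authentication get secret v4-0-config-system-session
    oc -n openshift-etcd-operator get configmap etcd-service-ca-bundle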
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.151422 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.151872 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s"
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.153265 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.653244176 +0000 UTC m=+63.413313501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.161734 4760 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.161811 4760 projected.go:194] Error preparing data for projected volume kube-api-access-sksfr for pod openshift-controller-manager/controller-manager-879f6c89f-zl9wx: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.161969 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3c14515f-ee0e-4560-bed2-7ef5160b61ec-kube-api-access-sksfr podName:3c14515f-ee0e-4560-bed2-7ef5160b61ec nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.661921613 +0000 UTC m=+63.421990968 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sksfr" (UniqueName: "kubernetes.io/projected/3c14515f-ee0e-4560-bed2-7ef5160b61ec-kube-api-access-sksfr") pod "controller-manager-879f6c89f-zl9wx" (UID: "3c14515f-ee0e-4560-bed2-7ef5160b61ec") : failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.171880 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.178789 4760 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.178817 4760 projected.go:194] Error preparing data for projected volume kube-api-access-g2kg7 for pod openshift-authentication-operator/authentication-operator-69f744f599-ffbsv: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.178864 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/85882885-093a-4277-80b0-c8db3141030f-kube-api-access-g2kg7 podName:85882885-093a-4277-80b0-c8db3141030f nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.678849675 +0000 UTC m=+63.438918990 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g2kg7" (UniqueName: "kubernetes.io/projected/85882885-093a-4277-80b0-c8db3141030f-kube-api-access-g2kg7") pod "authentication-operator-69f744f599-ffbsv" (UID: "85882885-093a-4277-80b0-c8db3141030f") : failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.212198 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.227144 4760 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.227222 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b7765ee-7703-4b50-a91b-94f8cadaf3e3-config podName:7b7765ee-7703-4b50-a91b-94f8cadaf3e3 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.727204723 +0000 UTC m=+63.487274038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7b7765ee-7703-4b50-a91b-94f8cadaf3e3-config") pod "kube-apiserver-operator-766d6c64bb-79bnd" (UID: "7b7765ee-7703-4b50-a91b-94f8cadaf3e3") : failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.239254 4760 secret.go:188] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.239335 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b7765ee-7703-4b50-a91b-94f8cadaf3e3-serving-cert podName:7b7765ee-7703-4b50-a91b-94f8cadaf3e3 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.739316912 +0000 UTC m=+63.499386227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7b7765ee-7703-4b50-a91b-94f8cadaf3e3-serving-cert") pod "kube-apiserver-operator-766d6c64bb-79bnd" (UID: "7b7765ee-7703-4b50-a91b-94f8cadaf3e3") : failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.241952 4760 secret.go:188] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.242057 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e844584-fa56-4ff9-b454-bcb89ae547db-serving-cert podName:9e844584-fa56-4ff9-b454-bcb89ae547db nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.742040816 +0000 UTC m=+63.502110131 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9e844584-fa56-4ff9-b454-bcb89ae547db-serving-cert") pod "kube-storage-version-migrator-operator-b67b599dd-5rgkr" (UID: "9e844584-fa56-4ff9-b454-bcb89ae547db") : failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.244281 4760 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.244402 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e844584-fa56-4ff9-b454-bcb89ae547db-config podName:9e844584-fa56-4ff9-b454-bcb89ae547db nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.744381952 +0000 UTC m=+63.504451267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9e844584-fa56-4ff9-b454-bcb89ae547db-config") pod "kube-storage-version-migrator-operator-b67b599dd-5rgkr" (UID: "9e844584-fa56-4ff9-b454-bcb89ae547db") : failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.244306 4760 secret.go:188] Couldn't get secret openshift-dns-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.244605 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/911b180f-4536-4181-956b-abd6e2c8e0d0-metrics-tls podName:911b180f-4536-4181-956b-abd6e2c8e0d0 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.744596637 +0000 UTC m=+63.504665952 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/911b180f-4536-4181-956b-abd6e2c8e0d0-metrics-tls") pod "dns-operator-744455d44c-tqbx6" (UID: "911b180f-4536-4181-956b-abd6e2c8e0d0") : failed to sync secret cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.250590 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.252827 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.252912 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.752903385 +0000 UTC m=+63.512972700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.253249 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s"
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.253586 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.7535682 +0000 UTC m=+63.513637575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
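The nestedpendingoperations timestamps make the backoff arithmetic visible: each "No retries permitted until" instant is the failure time plus the stated durationBeforeRetry, e.g. the MountDevice failure logged at 05:45:40.253586 above is re-queued until 05:45:40.7535682, almost exactly 500ms later (the m=+63.5... values are the same instants on the process's monotonic clock). To follow one volume's retry loop instead of the whole stream, filtering the journal on the PVC name is enough; the kubelet unit name is assumed from the service header at the top of this log:

    journalctl -u kubelet --since "2025-12-27 05:45:39" | grep pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8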
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.266254 4760 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.276941 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.279328 4760 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.291849 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.338706 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.351670 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.354873 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.355031 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.855011862 +0000 UTC m=+63.615081187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.355309 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s"
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.355662 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.855646866 +0000 UTC m=+63.615716191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.382139 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.392883 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.432027 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.437587 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64tmq\" (UniqueName: \"kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-kube-api-access-64tmq\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.451388 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.456954 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.457243 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:40.957220031 +0000 UTC m=+63.717289376 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.469150 4760 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.472281 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.493616 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.512234 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.515330 4760 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.532343 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.551282 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.559239 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s"
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.559810 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.059784028 +0000 UTC m=+63.819853383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.571041 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.591596 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.627142 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh5dp\" (UniqueName: \"kubernetes.io/projected/073ab0ad-d76e-4ea2-8ecf-eb3436e824bb-kube-api-access-mh5dp\") pod \"machine-config-controller-84d6567774-ctvzp\" (UID: \"073ab0ad-d76e-4ea2-8ecf-eb3436e824bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.646241 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dssdc\" (UniqueName: \"kubernetes.io/projected/5493a447-e04c-4f7b-b8ed-6816543ee631-kube-api-access-dssdc\") pod \"packageserver-d55dfcdfc-mq8g4\" (UID: \"5493a447-e04c-4f7b-b8ed-6816543ee631\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4"
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.646359 4760 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.661059 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.661233 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.161198847 +0000 UTC m=+63.921268172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.661379 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.661502 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx6k5\" (UniqueName: \"kubernetes.io/projected/cf3096b8-b961-454b-9647-ac2b9d3868ca-kube-api-access-qx6k5\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662120 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd89p\" (UniqueName: \"kubernetes.io/projected/32612889-890b-4efb-a777-8ad13a778841-kube-api-access-hd89p\") pod \"machine-api-operator-5694c8668f-9wjz8\" (UID: \"32612889-890b-4efb-a777-8ad13a778841\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662189 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sksfr\" (UniqueName: \"kubernetes.io/projected/3c14515f-ee0e-4560-bed2-7ef5160b61ec-kube-api-access-sksfr\") pod \"controller-manager-879f6c89f-zl9wx\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662217 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/364f3367-d5ab-4345-9c3f-bb7529d76c6f-etcd-service-ca\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662340 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-stats-auth\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662385 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662448 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-trusted-ca-bundle\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662476 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-machine-approver-tls\") pod \"machine-approver-56656f9798-4pjtd\" (UID: \"17a2cff3-78a9-45b6-a044-81e7cc36ca0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662498 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-metrics-certs\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662557 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bf23480c-73e2-4c48-b39c-92ef17211274-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662581 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662604 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-trusted-ca\") pod \"console-operator-58897d9998-2x9wk\" (UID: \"fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1\") " pod="openshift-console-operator/console-operator-58897d9998-2x9wk"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662668 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-service-ca\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662699 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662724 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662770 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-service-ca-bundle\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.662795 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-serving-cert\") pod \"route-controller-manager-6576b87f9c-rpcr2\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.663250 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/364f3367-d5ab-4345-9c3f-bb7529d76c6f-etcd-service-ca\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.665525 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd89p\" (UniqueName: \"kubernetes.io/projected/32612889-890b-4efb-a777-8ad13a778841-kube-api-access-hd89p\") pod \"machine-api-operator-5694c8668f-9wjz8\" (UID: \"32612889-890b-4efb-a777-8ad13a778841\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.665588 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-metrics-certs\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.666278 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.666393 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-service-ca\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.666726 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-serving-cert\") pod \"route-controller-manager-6576b87f9c-rpcr2\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.667590 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.667646 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx6k5\" (UniqueName: \"kubernetes.io/projected/cf3096b8-b961-454b-9647-ac2b9d3868ca-kube-api-access-qx6k5\") pod \"apiserver-76f77b778f-wsxtc\" (UID: \"cf3096b8-b961-454b-9647-ac2b9d3868ca\") " pod="openshift-apiserver/apiserver-76f77b778f-wsxtc"
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.667613 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.16758418 +0000 UTC m=+63.927653525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.667606 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjm6t\" (UniqueName: \"kubernetes.io/projected/33f25b04-37cd-4724-9db9-b1816dfb71bc-kube-api-access-xjm6t\") pod \"cluster-image-registry-operator-dc59b4c8b-r48mz\" (UID: \"33f25b04-37cd-4724-9db9-b1816dfb71bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.667793 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-trusted-ca\") pod \"console-operator-58897d9998-2x9wk\" (UID: \"fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1\") " pod="openshift-console-operator/console-operator-58897d9998-2x9wk"
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.667883 4760 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.668233 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.668939 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-service-ca-bundle\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.669466 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-stats-auth\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.671176 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-machine-approver-tls\") pod \"machine-approver-56656f9798-4pjtd\" (UID: \"17a2cff3-78a9-45b6-a044-81e7cc36ca0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.671595 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.672856 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bf23480c-73e2-4c48-b39c-92ef17211274-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.673200 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sksfr\" (UniqueName: \"kubernetes.io/projected/3c14515f-ee0e-4560-bed2-7ef5160b61ec-kube-api-access-sksfr\") pod \"controller-manager-879f6c89f-zl9wx\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.681106 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-trusted-ca-bundle\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.683959 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.703986 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.714393 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvqfp\" (UniqueName: \"kubernetes.io/projected/1521745e-1f44-4a25-8eff-05062c2c24ef-kube-api-access-pvqfp\") pod \"ingress-operator-5b745b69d9-4kjv6\" (UID: \"1521745e-1f44-4a25-8eff-05062c2c24ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.728874 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxzvw\" (UniqueName: \"kubernetes.io/projected/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-kube-api-access-lxzvw\") pod \"cni-sysctl-allowlist-ds-h4drj\" (UID: \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.731898 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.746350 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297cc\" (UniqueName: \"kubernetes.io/projected/a00e2aaf-ac7c-4672-93d5-fc662e271b41-kube-api-access-297cc\") pod \"catalog-operator-68c6474976-sbswf\" (UID: \"a00e2aaf-ac7c-4672-93d5-fc662e271b41\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.752055 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.763984 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.764291 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.264271577 +0000 UTC m=+64.024340902 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
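The two util.go:30 messages above mark a phase change for machine-config-controller-84d6567774-ctvzp and packageserver-d55dfcdfc-mq8g4: their volumes are mounted, the kubelet finds no existing pod sandbox, and it asks the CRI runtime to create one before any containers start. On a CRI-O node the runtime side of this can be inspected directly; a minimal check, with the name filter purely illustrative:

    crictl pods --name machine-config-controller
    crictl pods --state NotReady    # sandboxes in NotReady state, as reported by the runtime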
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.764346 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2kg7\" (UniqueName: \"kubernetes.io/projected/85882885-093a-4277-80b0-c8db3141030f-kube-api-access-g2kg7\") pod \"authentication-operator-69f744f599-ffbsv\" (UID: \"85882885-093a-4277-80b0-c8db3141030f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.764435 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.764490 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e844584-fa56-4ff9-b454-bcb89ae547db-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgkr\" (UID: \"9e844584-fa56-4ff9-b454-bcb89ae547db\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.764541 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7765ee-7703-4b50-a91b-94f8cadaf3e3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-79bnd\" (UID: \"7b7765ee-7703-4b50-a91b-94f8cadaf3e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.764571 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e844584-fa56-4ff9-b454-bcb89ae547db-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgkr\" (UID: \"9e844584-fa56-4ff9-b454-bcb89ae547db\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.764606 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/911b180f-4536-4181-956b-abd6e2c8e0d0-metrics-tls\") pod \"dns-operator-744455d44c-tqbx6\" (UID: \"911b180f-4536-4181-956b-abd6e2c8e0d0\") " pod="openshift-dns-operator/dns-operator-744455d44c-tqbx6" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.764665 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7765ee-7703-4b50-a91b-94f8cadaf3e3-config\") pod \"kube-apiserver-operator-766d6c64bb-79bnd\" (UID: \"7b7765ee-7703-4b50-a91b-94f8cadaf3e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd" Dec 27 05:45:40 crc 
kubenswrapper[4760]: I1227 05:45:40.765405 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7765ee-7703-4b50-a91b-94f8cadaf3e3-config\") pod \"kube-apiserver-operator-766d6c64bb-79bnd\" (UID: \"7b7765ee-7703-4b50-a91b-94f8cadaf3e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.769584 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2kg7\" (UniqueName: \"kubernetes.io/projected/85882885-093a-4277-80b0-c8db3141030f-kube-api-access-g2kg7\") pod \"authentication-operator-69f744f599-ffbsv\" (UID: \"85882885-093a-4277-80b0-c8db3141030f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.770276 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.270256489 +0000 UTC m=+64.030325864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.796135 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f87jg\" (UniqueName: \"kubernetes.io/projected/e90733f1-2aa6-4487-9212-1f21cc77bea4-kube-api-access-f87jg\") pod \"migrator-59844c95c7-d8hds\" (UID: \"e90733f1-2aa6-4487-9212-1f21cc77bea4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8hds" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.808121 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmx8\" (UniqueName: \"kubernetes.io/projected/4a72f86c-c203-482f-b0e9-beb1f3f77fa0-kube-api-access-xdmx8\") pod \"service-ca-operator-777779d784-qxkdj\" (UID: \"4a72f86c-c203-482f-b0e9-beb1f3f77fa0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj" Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.813320 4760 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.813437 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.827679 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfgz6\" (UniqueName: \"kubernetes.io/projected/d0478c05-e6ed-4970-ad35-5ef3904f94c9-kube-api-access-lfgz6\") pod \"ingress-canary-cdjdr\" (UID: \"d0478c05-e6ed-4970-ad35-5ef3904f94c9\") " pod="openshift-ingress-canary/ingress-canary-cdjdr" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.851140 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xssq2\" (UniqueName: \"kubernetes.io/projected/3cacaf44-3b98-45b6-9a51-e41e33c4679d-kube-api-access-xssq2\") pod \"machine-config-server-kkb4n\" (UID: \"3cacaf44-3b98-45b6-9a51-e41e33c4679d\") " pod="openshift-machine-config-operator/machine-config-server-kkb4n" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.867811 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.868165 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cdjdr" Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.870481 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.368367221 +0000 UTC m=+64.128436576 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.871010 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.872304 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.372280554 +0000 UTC m=+64.132349909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.880343 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77jp9\" (UniqueName: \"kubernetes.io/projected/9a5980c3-ee00-41b1-9707-f349149a53c4-kube-api-access-77jp9\") pod \"machine-config-operator-74547568cd-6j9tg\" (UID: \"9a5980c3-ee00-41b1-9707-f349149a53c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.896725 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33f25b04-37cd-4724-9db9-b1816dfb71bc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r48mz\" (UID: \"33f25b04-37cd-4724-9db9-b1816dfb71bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.910334 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqm5w\" (UniqueName: \"kubernetes.io/projected/62b0425e-20fc-41c8-99c0-cdccdc48d766-kube-api-access-kqm5w\") pod \"collect-profiles-29446905-rnvbt\" (UID: \"62b0425e-20fc-41c8-99c0-cdccdc48d766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.932372 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rct26\" (UniqueName: \"kubernetes.io/projected/67b40739-4e24-41b5-9d6a-7ab19939c81c-kube-api-access-rct26\") pod \"package-server-manager-789f6589d5-f9q5z\" (UID: \"67b40739-4e24-41b5-9d6a-7ab19939c81c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.941185 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" event={"ID":"abd7bc00-bf5b-48a1-94fe-82dae0bc732e","Type":"ContainerStarted","Data":"dc45048db49673f214ae57a30d034defea7a961a5c7e3662abacc39268573fd0"} Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.946733 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp55s\" (UniqueName: \"kubernetes.io/projected/2f024673-515b-451c-b19c-f542b4cebba9-kube-api-access-hp55s\") pod \"marketplace-operator-79b997595-w927h\" (UID: \"2f024673-515b-451c-b19c-f542b4cebba9\") " pod="openshift-marketplace/marketplace-operator-79b997595-w927h" Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.953787 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4"] Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.959444 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp"] Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.972619 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:40 crc kubenswrapper[4760]: W1227 05:45:40.975913 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod073ab0ad_d76e_4ea2_8ecf_eb3436e824bb.slice/crio-ba60054893d53d7a68af77c871ab48f7fcd5f13af41dd64be32501d4e80f24f8 WatchSource:0}: Error finding container ba60054893d53d7a68af77c871ab48f7fcd5f13af41dd64be32501d4e80f24f8: Status 404 returned error can't find the container with id ba60054893d53d7a68af77c871ab48f7fcd5f13af41dd64be32501d4e80f24f8 Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.976648 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.476628674 +0000 UTC m=+64.236697989 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.977225 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.977587 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.477562366 +0000 UTC m=+64.237631681 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:40 crc kubenswrapper[4760]: I1227 05:45:40.986967 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gffgl\" (UniqueName: \"kubernetes.io/projected/e75b3601-c0ac-4de5-9847-52a8c087a6f9-kube-api-access-gffgl\") pod \"multus-admission-controller-857f4d67dd-phfqm\" (UID: \"e75b3601-c0ac-4de5-9847-52a8c087a6f9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-phfqm" Dec 27 05:45:40 crc kubenswrapper[4760]: E1227 05:45:40.987058 4760 projected.go:288] Couldn't get configMap openshift-cluster-machine-approver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.011751 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27npl\" (UniqueName: \"kubernetes.io/projected/55aab342-3de1-46cf-8d85-d71345fd1538-kube-api-access-27npl\") pod \"dns-default-p9vf5\" (UID: \"55aab342-3de1-46cf-8d85-d71345fd1538\") " pod="openshift-dns/dns-default-p9vf5" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.016457 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-phfqm" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.016828 4760 projected.go:288] Couldn't get configMap openshift-ingress/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.022786 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.029425 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vjrl\" (UniqueName: \"kubernetes.io/projected/411e4a40-9030-4d01-aeb4-c5dd6d25b9b2-kube-api-access-7vjrl\") pod \"service-ca-9c57cc56f-6cgcs\" (UID: \"411e4a40-9030-4d01-aeb4-c5dd6d25b9b2\") " pod="openshift-service-ca/service-ca-9c57cc56f-6cgcs" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.040315 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.047114 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58rsv\" (UniqueName: \"kubernetes.io/projected/7f4d886c-9f72-4358-8ddb-f820f7181639-kube-api-access-58rsv\") pod \"csi-hostpathplugin-l26l5\" (UID: \"7f4d886c-9f72-4358-8ddb-f820f7181639\") " pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.048521 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w927h" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.051907 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.055338 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.062305 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6cgcs" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.063973 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cdjdr"] Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.064863 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7765ee-7703-4b50-a91b-94f8cadaf3e3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-79bnd\" (UID: \"7b7765ee-7703-4b50-a91b-94f8cadaf3e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.077370 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.077996 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.078305 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.57828587 +0000 UTC m=+64.338355185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.078606 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.078872 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-27 05:45:41.578865494 +0000 UTC m=+64.338934809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.084595 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b1e43e2-44f4-4c7f-acea-660e06d1ef90-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dv2bd\" (UID: \"8b1e43e2-44f4-4c7f-acea-660e06d1ef90\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.086430 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8hds" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.092486 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.098558 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.116494 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vdcp\" (UniqueName: \"kubernetes.io/projected/db38a69b-db55-4829-8258-bf3da32477ac-kube-api-access-9vdcp\") pod \"control-plane-machine-set-operator-78cbb6b69f-zfw4n\" (UID: \"db38a69b-db55-4829-8258-bf3da32477ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zfw4n" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.120372 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kkb4n" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.129844 4760 request.go:700] Waited for 1.888531141s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token Dec 27 05:45:41 crc kubenswrapper[4760]: W1227 05:45:41.141429 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0478c05_e6ed_4970_ad35_5ef3904f94c9.slice/crio-c53d5f798ae0d70d6c41cc4cbfc8d710b872c16eb23ca6ce901585b39f5c0f8e WatchSource:0}: Error finding container c53d5f798ae0d70d6c41cc4cbfc8d710b872c16eb23ca6ce901585b39f5c0f8e: Status 404 returned error can't find the container with id c53d5f798ae0d70d6c41cc4cbfc8d710b872c16eb23ca6ce901585b39f5c0f8e Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.148425 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1521745e-1f44-4a25-8eff-05062c2c24ef-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4kjv6\" (UID: \"1521745e-1f44-4a25-8eff-05062c2c24ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.151012 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-l26l5" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.152516 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.153172 4760 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.159792 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-p9vf5" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.164915 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e844584-fa56-4ff9-b454-bcb89ae547db-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgkr\" (UID: \"9e844584-fa56-4ff9-b454-bcb89ae547db\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.180312 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.180781 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.680766295 +0000 UTC m=+64.440835610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.200014 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/385068c5-bdbb-41fe-b1bc-1597b2a461ea-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wzqd4\" (UID: \"385068c5-bdbb-41fe-b1bc-1597b2a461ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.204945 4760 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.207698 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7cdk\" (UniqueName: \"kubernetes.io/projected/bc145080-11cc-455d-b8d4-7baab6859228-kube-api-access-t7cdk\") pod \"olm-operator-6b444d44fb-tlp92\" (UID: \"bc145080-11cc-455d-b8d4-7baab6859228\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.211615 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.216057 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-phfqm"] Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.220659 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e844584-fa56-4ff9-b454-bcb89ae547db-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgkr\" (UID: \"9e844584-fa56-4ff9-b454-bcb89ae547db\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.231712 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.242277 4760 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.245268 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/911b180f-4536-4181-956b-abd6e2c8e0d0-metrics-tls\") pod \"dns-operator-744455d44c-tqbx6\" (UID: \"911b180f-4536-4181-956b-abd6e2c8e0d0\") " pod="openshift-dns-operator/dns-operator-744455d44c-tqbx6" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.265168 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d80f15d-ce8a-4c94-834d-9ec4829bc34f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-q7d2f\" (UID: \"5d80f15d-ce8a-4c94-834d-9ec4829bc34f\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.266855 4760 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.266881 4760 projected.go:194] Error preparing data for projected volume kube-api-access-clt6p for pod openshift-config-operator/openshift-config-operator-7777fb866f-pdd96: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.266942 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/64a3b986-ea3d-4cbb-83ba-44971b220664-kube-api-access-clt6p podName:64a3b986-ea3d-4cbb-83ba-44971b220664 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.766922813 +0000 UTC m=+64.526992128 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-clt6p" (UniqueName: "kubernetes.io/projected/64a3b986-ea3d-4cbb-83ba-44971b220664-kube-api-access-clt6p") pod "openshift-config-operator-7777fb866f-pdd96" (UID: "64a3b986-ea3d-4cbb-83ba-44971b220664") : failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.268124 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.270546 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.279827 4760 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.279847 4760 projected.go:194] Error preparing data for projected volume kube-api-access-bp9vm for pod openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.279904 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-kube-api-access-bp9vm podName:3bb3c435-9a35-45f9-af35-cb29f2e6ccb1 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.779888161 +0000 UTC m=+64.539957476 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bp9vm" (UniqueName: "kubernetes.io/projected/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-kube-api-access-bp9vm") pod "apiserver-7bbb656c7d-b6725" (UID: "3bb3c435-9a35-45f9-af35-cb29f2e6ccb1") : failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.281154 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zfw4n" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.281658 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.281959 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.78194522 +0000 UTC m=+64.542014535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.290082 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.291340 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.295581 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.312031 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.323276 4760 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.332379 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.333733 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6cgcs"] Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.369901 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.371129 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.382642 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.382992 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.882977511 +0000 UTC m=+64.643046826 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.391267 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.406217 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.411670 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.431194 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.451663 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.470230 4760 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.470259 4760 projected.go:194] Error preparing data for projected volume kube-api-access-gmrx2 for pod openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.470309 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-kube-api-access-gmrx2 podName:e0c1456f-b18f-4c71-a1f8-319ec8b012a1 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.970292316 +0000 UTC m=+64.730361631 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gmrx2" (UniqueName: "kubernetes.io/projected/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-kube-api-access-gmrx2") pod "route-controller-manager-6576b87f9c-rpcr2" (UID: "e0c1456f-b18f-4c71-a1f8-319ec8b012a1") : failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.471623 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.484192 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.484491 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:41.984480833 +0000 UTC m=+64.744550148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.492959 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.509733 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z"] Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.515971 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.516393 4760 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.516424 4760 projected.go:194] Error preparing data for projected volume kube-api-access-9p22g for pod openshift-console-operator/console-operator-58897d9998-2x9wk: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.516527 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-kube-api-access-9p22g podName:fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.016492864 +0000 UTC m=+64.776562209 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9p22g" (UniqueName: "kubernetes.io/projected/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-kube-api-access-9p22g") pod "console-operator-58897d9998-2x9wk" (UID: "fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1") : failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.521801 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w927h"] Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.523979 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf"] Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.531036 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.552033 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.571934 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.584911 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.585048 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.085029623 +0000 UTC m=+64.845098938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.585705 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.586826 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.086810735 +0000 UTC m=+64.846880050 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.591524 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.611807 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.615083 4760 projected.go:194] Error preparing data for projected volume kube-api-access-ws2z9 for pod openshift-console/console-f9d7485db-cmmr9: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.615246 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3dfa4237-e979-4215-9f2c-20aa6303cae7-kube-api-access-ws2z9 podName:3dfa4237-e979-4215-9f2c-20aa6303cae7 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.115222841 +0000 UTC m=+64.875292166 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ws2z9" (UniqueName: "kubernetes.io/projected/3dfa4237-e979-4215-9f2c-20aa6303cae7-kube-api-access-ws2z9") pod "console-f9d7485db-cmmr9" (UID: "3dfa4237-e979-4215-9f2c-20aa6303cae7") : failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.617697 4760 projected.go:194] Error preparing data for projected volume kube-api-access-bwsxh for pod openshift-console/downloads-7954f5f757-g5866: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.617775 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0015afce-dba1-4f1d-a3d3-5f9abe477e43-kube-api-access-bwsxh podName:0015afce-dba1-4f1d-a3d3-5f9abe477e43 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.11776144 +0000 UTC m=+64.877830775 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bwsxh" (UniqueName: "kubernetes.io/projected/0015afce-dba1-4f1d-a3d3-5f9abe477e43-kube-api-access-bwsxh") pod "downloads-7954f5f757-g5866" (UID: "0015afce-dba1-4f1d-a3d3-5f9abe477e43") : failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.631652 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.638131 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s4dlz for pod openshift-etcd-operator/etcd-operator-b45778765-5p2sr: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.638192 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/364f3367-d5ab-4345-9c3f-bb7529d76c6f-kube-api-access-s4dlz podName:364f3367-d5ab-4345-9c3f-bb7529d76c6f nodeName:}" failed. 
No retries permitted until 2025-12-27 05:45:42.138177006 +0000 UTC m=+64.898246311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s4dlz" (UniqueName: "kubernetes.io/projected/364f3367-d5ab-4345-9c3f-bb7529d76c6f-kube-api-access-s4dlz") pod "etcd-operator-b45778765-5p2sr" (UID: "364f3367-d5ab-4345-9c3f-bb7529d76c6f") : failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.651768 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.656739 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b7765ee-7703-4b50-a91b-94f8cadaf3e3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-79bnd\" (UID: \"7b7765ee-7703-4b50-a91b-94f8cadaf3e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.671393 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.672538 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.686200 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.686557 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.186545556 +0000 UTC m=+64.946614871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.691391 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.696822 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.712046 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.715198 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.731870 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.734393 4760 projected.go:194] Error preparing data for projected volume kube-api-access-tchcj for pod openshift-authentication/oauth-openshift-558db77b4-x4v5w: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.734450 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/933d294b-c115-4bd3-ade2-1ae37665ae1b-kube-api-access-tchcj podName:933d294b-c115-4bd3-ade2-1ae37665ae1b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.234433504 +0000 UTC m=+64.994502819 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tchcj" (UniqueName: "kubernetes.io/projected/933d294b-c115-4bd3-ade2-1ae37665ae1b-kube-api-access-tchcj") pod "oauth-openshift-558db77b4-x4v5w" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b") : failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.751853 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.756591 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.772346 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.787529 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp9vm\" (UniqueName: \"kubernetes.io/projected/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-kube-api-access-bp9vm\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.787585 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clt6p\" (UniqueName: \"kubernetes.io/projected/64a3b986-ea3d-4cbb-83ba-44971b220664-kube-api-access-clt6p\") pod \"openshift-config-operator-7777fb866f-pdd96\" (UID: \"64a3b986-ea3d-4cbb-83ba-44971b220664\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.787758 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.788273 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-27 05:45:42.288250312 +0000 UTC m=+65.048319657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.792607 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.798105 4760 projected.go:194] Error preparing data for projected volume kube-api-access-pfx4n for pod openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.798159 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-kube-api-access-pfx4n podName:17a2cff3-78a9-45b6-a044-81e7cc36ca0e nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.298142697 +0000 UTC m=+65.058212022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfx4n" (UniqueName: "kubernetes.io/projected/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-kube-api-access-pfx4n") pod "machine-approver-56656f9798-4pjtd" (UID: "17a2cff3-78a9-45b6-a044-81e7cc36ca0e") : failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.805246 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp9vm\" (UniqueName: \"kubernetes.io/projected/3bb3c435-9a35-45f9-af35-cb29f2e6ccb1-kube-api-access-bp9vm\") pod \"apiserver-7bbb656c7d-b6725\" (UID: \"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.810635 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clt6p\" (UniqueName: \"kubernetes.io/projected/64a3b986-ea3d-4cbb-83ba-44971b220664-kube-api-access-clt6p\") pod \"openshift-config-operator-7777fb866f-pdd96\" (UID: \"64a3b986-ea3d-4cbb-83ba-44971b220664\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.812451 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.817171 4760 projected.go:194] Error preparing data for projected volume kube-api-access-mhm94 for pod openshift-ingress/router-default-5444994796-mvf5k: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.817264 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-kube-api-access-mhm94 podName:6f6b13a4-0ce7-4190-a0a3-2741bd546a1d nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.317239191 +0000 UTC m=+65.077308546 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mhm94" (UniqueName: "kubernetes.io/projected/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-kube-api-access-mhm94") pod "router-default-5444994796-mvf5k" (UID: "6f6b13a4-0ce7-4190-a0a3-2741bd546a1d") : failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.832180 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.851191 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.853659 4760 projected.go:194] Error preparing data for projected volume kube-api-access-982gl for pod openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.853808 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2-kube-api-access-982gl podName:fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.353772239 +0000 UTC m=+65.113841624 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-982gl" (UniqueName: "kubernetes.io/projected/fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2-kube-api-access-982gl") pod "cluster-samples-operator-665b6dd947-72sjb" (UID: "fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2") : failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.872034 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.872425 4760 projected.go:194] Error preparing data for projected volume kube-api-access-shhgt for pod openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.872469 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8c470872-3eb8-4afc-a357-b225ba9b6c94-kube-api-access-shhgt podName:8c470872-3eb8-4afc-a357-b225ba9b6c94 nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.372457863 +0000 UTC m=+65.132527298 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-shhgt" (UniqueName: "kubernetes.io/projected/8c470872-3eb8-4afc-a357-b225ba9b6c94-kube-api-access-shhgt") pod "openshift-controller-manager-operator-756b6f6bc6-jrhvz" (UID: "8c470872-3eb8-4afc-a357-b225ba9b6c94") : failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.889404 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.889526 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.389513499 +0000 UTC m=+65.149582814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.889947 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.890242 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.390233646 +0000 UTC m=+65.150302961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.891677 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.894038 4760 projected.go:194] Error preparing data for projected volume kube-api-access-w57dj for pod openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz: failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.894141 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da-kube-api-access-w57dj podName:acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.394116808 +0000 UTC m=+65.154186163 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w57dj" (UniqueName: "kubernetes.io/projected/acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da-kube-api-access-w57dj") pod "openshift-apiserver-operator-796bbdcf4f-rs9rz" (UID: "acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da") : failed to sync configmap cache: timed out waiting for the condition Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.931979 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.936939 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qdm4\" (UniqueName: \"kubernetes.io/projected/9e844584-fa56-4ff9-b454-bcb89ae547db-kube-api-access-5qdm4\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgkr\" (UID: \"9e844584-fa56-4ff9-b454-bcb89ae547db\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.952192 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.956639 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.972233 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.985703 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdmmd\" (UniqueName: \"kubernetes.io/projected/911b180f-4536-4181-956b-abd6e2c8e0d0-kube-api-access-cdmmd\") pod \"dns-operator-744455d44c-tqbx6\" (UID: \"911b180f-4536-4181-956b-abd6e2c8e0d0\") " pod="openshift-dns-operator/dns-operator-744455d44c-tqbx6" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.991289 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.991572 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.491540433 +0000 UTC m=+65.251609778 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.991772 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmrx2\" (UniqueName: \"kubernetes.io/projected/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-kube-api-access-gmrx2\") pod \"route-controller-manager-6576b87f9c-rpcr2\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.991901 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:41 crc kubenswrapper[4760]: E1227 05:45:41.992693 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.49266402 +0000 UTC m=+65.252733365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:41 crc kubenswrapper[4760]: I1227 05:45:41.997582 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmrx2\" (UniqueName: \"kubernetes.io/projected/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-kube-api-access-gmrx2\") pod \"route-controller-manager-6576b87f9c-rpcr2\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.034807 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.036795 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.072603 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.078549 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.092756 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.093235 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p22g\" (UniqueName: \"kubernetes.io/projected/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-kube-api-access-9p22g\") pod \"console-operator-58897d9998-2x9wk\" (UID: \"fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1\") " pod="openshift-console-operator/console-operator-58897d9998-2x9wk" Dec 27 05:45:42 crc kubenswrapper[4760]: E1227 05:45:42.093790 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.593757902 +0000 UTC m=+65.353827257 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.098823 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p22g\" (UniqueName: \"kubernetes.io/projected/fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1-kube-api-access-9p22g\") pod \"console-operator-58897d9998-2x9wk\" (UID: \"fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1\") " pod="openshift-console-operator/console-operator-58897d9998-2x9wk" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.162229 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" event={"ID":"5493a447-e04c-4f7b-b8ed-6816543ee631","Type":"ContainerStarted","Data":"d15e7a50d61640a87511188837202b2fa190532549586fc5886d89923d32fd91"} Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.168940 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp" event={"ID":"073ab0ad-d76e-4ea2-8ecf-eb3436e824bb","Type":"ContainerStarted","Data":"ba60054893d53d7a68af77c871ab48f7fcd5f13af41dd64be32501d4e80f24f8"} Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.173223 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.177902 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr" Dec 27 05:45:42 crc kubenswrapper[4760]: W1227 05:45:42.184213 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda00e2aaf_ac7c_4672_93d5_fc662e271b41.slice/crio-080f9fa5170267a259c6c32d9c67fd7070fb28380b057f668ce884c3cc26889e WatchSource:0}: Error finding container 080f9fa5170267a259c6c32d9c67fd7070fb28380b057f668ce884c3cc26889e: Status 404 returned error can't find the container with id 080f9fa5170267a259c6c32d9c67fd7070fb28380b057f668ce884c3cc26889e Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.194884 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.200732 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws2z9\" (UniqueName: \"kubernetes.io/projected/3dfa4237-e979-4215-9f2c-20aa6303cae7-kube-api-access-ws2z9\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.200978 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.204972 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-tqbx6" Dec 27 05:45:42 crc kubenswrapper[4760]: E1227 05:45:42.205422 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.705394606 +0000 UTC m=+65.465463931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.208048 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwsxh\" (UniqueName: \"kubernetes.io/projected/0015afce-dba1-4f1d-a3d3-5f9abe477e43-kube-api-access-bwsxh\") pod \"downloads-7954f5f757-g5866\" (UID: \"0015afce-dba1-4f1d-a3d3-5f9abe477e43\") " pod="openshift-console/downloads-7954f5f757-g5866" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.208132 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4dlz\" (UniqueName: \"kubernetes.io/projected/364f3367-d5ab-4345-9c3f-bb7529d76c6f-kube-api-access-s4dlz\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.211419 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws2z9\" (UniqueName: \"kubernetes.io/projected/3dfa4237-e979-4215-9f2c-20aa6303cae7-kube-api-access-ws2z9\") pod \"console-f9d7485db-cmmr9\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.217287 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwsxh\" (UniqueName: \"kubernetes.io/projected/0015afce-dba1-4f1d-a3d3-5f9abe477e43-kube-api-access-bwsxh\") pod \"downloads-7954f5f757-g5866\" (UID: \"0015afce-dba1-4f1d-a3d3-5f9abe477e43\") " pod="openshift-console/downloads-7954f5f757-g5866" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.223559 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4dlz\" (UniqueName: \"kubernetes.io/projected/364f3367-d5ab-4345-9c3f-bb7529d76c6f-kube-api-access-s4dlz\") pod \"etcd-operator-b45778765-5p2sr\" (UID: \"364f3367-d5ab-4345-9c3f-bb7529d76c6f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:42 crc kubenswrapper[4760]: W1227 05:45:42.265931 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cacaf44_3b98_45b6_9a51_e41e33c4679d.slice/crio-dfe0d1bd574ca65fd6e9d18eee5ebd254fe779d8622fa3fc72f4613b36c4520a WatchSource:0}: Error finding container dfe0d1bd574ca65fd6e9d18eee5ebd254fe779d8622fa3fc72f4613b36c4520a: Status 404 returned error can't find the container with id dfe0d1bd574ca65fd6e9d18eee5ebd254fe779d8622fa3fc72f4613b36c4520a Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.309131 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:42 crc kubenswrapper[4760]: E1227 05:45:42.309235 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.809219623 +0000 UTC m=+65.569288938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.309499 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tchcj\" (UniqueName: \"kubernetes.io/projected/933d294b-c115-4bd3-ade2-1ae37665ae1b-kube-api-access-tchcj\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.309531 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.309579 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfx4n\" (UniqueName: \"kubernetes.io/projected/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-kube-api-access-pfx4n\") pod \"machine-approver-56656f9798-4pjtd\" (UID: \"17a2cff3-78a9-45b6-a044-81e7cc36ca0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" Dec 27 05:45:42 crc kubenswrapper[4760]: E1227 05:45:42.311399 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.811380505 +0000 UTC m=+65.571449870 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.315755 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tchcj\" (UniqueName: \"kubernetes.io/projected/933d294b-c115-4bd3-ade2-1ae37665ae1b-kube-api-access-tchcj\") pod \"oauth-openshift-558db77b4-x4v5w\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.315818 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfx4n\" (UniqueName: \"kubernetes.io/projected/17a2cff3-78a9-45b6-a044-81e7cc36ca0e-kube-api-access-pfx4n\") pod \"machine-approver-56656f9798-4pjtd\" (UID: \"17a2cff3-78a9-45b6-a044-81e7cc36ca0e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.331403 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.335165 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.347134 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj"] Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.371622 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.372437 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.406453 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d8hds"] Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.410888 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:42 crc kubenswrapper[4760]: E1227 05:45:42.411063 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.911035793 +0000 UTC m=+65.671105108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.411228 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-982gl\" (UniqueName: \"kubernetes.io/projected/fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2-kube-api-access-982gl\") pod \"cluster-samples-operator-665b6dd947-72sjb\" (UID: \"fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.411268 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shhgt\" (UniqueName: \"kubernetes.io/projected/8c470872-3eb8-4afc-a357-b225ba9b6c94-kube-api-access-shhgt\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrhvz\" (UID: \"8c470872-3eb8-4afc-a357-b225ba9b6c94\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.411291 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w57dj\" (UniqueName: \"kubernetes.io/projected/acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da-kube-api-access-w57dj\") pod \"openshift-apiserver-operator-796bbdcf4f-rs9rz\" (UID: \"acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.411396 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.411415 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhm94\" (UniqueName: \"kubernetes.io/projected/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-kube-api-access-mhm94\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:42 crc kubenswrapper[4760]: E1227 05:45:42.411791 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:42.911777031 +0000 UTC m=+65.671846346 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.412045 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.414570 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-g5866" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.416737 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shhgt\" (UniqueName: \"kubernetes.io/projected/8c470872-3eb8-4afc-a357-b225ba9b6c94-kube-api-access-shhgt\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrhvz\" (UID: \"8c470872-3eb8-4afc-a357-b225ba9b6c94\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.416888 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhm94\" (UniqueName: \"kubernetes.io/projected/6f6b13a4-0ce7-4190-a0a3-2741bd546a1d-kube-api-access-mhm94\") pod \"router-default-5444994796-mvf5k\" (UID: \"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d\") " pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.417459 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w57dj\" (UniqueName: \"kubernetes.io/projected/acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da-kube-api-access-w57dj\") pod \"openshift-apiserver-operator-796bbdcf4f-rs9rz\" (UID: \"acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.419507 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-982gl\" (UniqueName: \"kubernetes.io/projected/fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2-kube-api-access-982gl\") pod \"cluster-samples-operator-665b6dd947-72sjb\" (UID: \"fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.432212 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.438561 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2x9wk" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.451529 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.461375 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.466525 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9wjz8"] Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.491734 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.498271 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.508964 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg"] Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.511868 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.512064 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:42 crc kubenswrapper[4760]: E1227 05:45:42.512250 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:43.012162556 +0000 UTC m=+65.772231871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.512415 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:42 crc kubenswrapper[4760]: E1227 05:45:42.512682 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:43.012672609 +0000 UTC m=+65.772741924 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.514160 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.545148 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt"] Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.591959 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.594192 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.597073 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz"] Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.612822 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:42 crc kubenswrapper[4760]: E1227 05:45:42.613003 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:43.112981462 +0000 UTC m=+65.873050777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.613218 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:42 crc kubenswrapper[4760]: E1227 05:45:42.613538 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:43.113530216 +0000 UTC m=+65.873599531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.632304 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.635318 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.672484 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.678364 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.712146 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.712185 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz" Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.713854 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:42 crc kubenswrapper[4760]: E1227 05:45:42.714524 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:43.214497375 +0000 UTC m=+65.974566720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:42 crc kubenswrapper[4760]: W1227 05:45:42.747423 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a72f86c_c203_482f_b0e9_beb1f3f77fa0.slice/crio-cf1f87840645feec63a2dabecad44abbd112798b7ea24934af4f8962c94ca816 WatchSource:0}: Error finding container cf1f87840645feec63a2dabecad44abbd112798b7ea24934af4f8962c94ca816: Status 404 returned error can't find the container with id cf1f87840645feec63a2dabecad44abbd112798b7ea24934af4f8962c94ca816 Dec 27 05:45:42 crc kubenswrapper[4760]: W1227 05:45:42.751669 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode90733f1_2aa6_4487_9212_1f21cc77bea4.slice/crio-1f8e9580e9648fcb38cbe395e22e3757b311d7f58d0b14adb2acd6370fba5d20 WatchSource:0}: Error finding container 1f8e9580e9648fcb38cbe395e22e3757b311d7f58d0b14adb2acd6370fba5d20: Status 404 returned error can't find the container with id 1f8e9580e9648fcb38cbe395e22e3757b311d7f58d0b14adb2acd6370fba5d20 Dec 27 05:45:42 crc kubenswrapper[4760]: W1227 05:45:42.753898 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32612889_890b_4efb_a777_8ad13a778841.slice/crio-adef38db587bde735147e9408e047065faa6a4f5e2d69297435720424322f6af WatchSource:0}: Error finding container adef38db587bde735147e9408e047065faa6a4f5e2d69297435720424322f6af: Status 404 returned error can't find the container with id adef38db587bde735147e9408e047065faa6a4f5e2d69297435720424322f6af Dec 27 05:45:42 crc kubenswrapper[4760]: W1227 05:45:42.756812 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a5980c3_ee00_41b1_9707_f349149a53c4.slice/crio-080de989d6d000c5c8eaf510c063f6bb5e63ffc69a1d41a5167ef4c49c36ce5b WatchSource:0}: Error finding container 080de989d6d000c5c8eaf510c063f6bb5e63ffc69a1d41a5167ef4c49c36ce5b: Status 404 returned error can't find the container with id 080de989d6d000c5c8eaf510c063f6bb5e63ffc69a1d41a5167ef4c49c36ce5b Dec 27 05:45:42 crc kubenswrapper[4760]: W1227 05:45:42.759615 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62b0425e_20fc_41c8_99c0_cdccdc48d766.slice/crio-29e28891056005a86eb00fe1028c6d88df6301647e2cd2c9a29961d3fc722a06 WatchSource:0}: Error finding container 29e28891056005a86eb00fe1028c6d88df6301647e2cd2c9a29961d3fc722a06: Status 404 returned error can't find the container with id 29e28891056005a86eb00fe1028c6d88df6301647e2cd2c9a29961d3fc722a06 Dec 27 05:45:42 crc kubenswrapper[4760]: W1227 05:45:42.759825 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33f25b04_37cd_4724_9db9_b1816dfb71bc.slice/crio-a00573256401dae34743e6f9e0386666054a88ece06d394e5c99bd9ad26a5575 WatchSource:0}: Error finding container 
a00573256401dae34743e6f9e0386666054a88ece06d394e5c99bd9ad26a5575: Status 404 returned error can't find the container with id a00573256401dae34743e6f9e0386666054a88ece06d394e5c99bd9ad26a5575 Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.823853 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:42 crc kubenswrapper[4760]: E1227 05:45:42.824158 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:43.324145901 +0000 UTC m=+66.084215216 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:42 crc kubenswrapper[4760]: W1227 05:45:42.922584 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod385068c5_bdbb_41fe_b1bc_1597b2a461ea.slice/crio-6ac4ab40d7ce02fe7b9000750242ddf8c6cedfee0011f49b3405a2657cb0c33e WatchSource:0}: Error finding container 6ac4ab40d7ce02fe7b9000750242ddf8c6cedfee0011f49b3405a2657cb0c33e: Status 404 returned error can't find the container with id 6ac4ab40d7ce02fe7b9000750242ddf8c6cedfee0011f49b3405a2657cb0c33e Dec 27 05:45:42 crc kubenswrapper[4760]: I1227 05:45:42.924696 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:42 crc kubenswrapper[4760]: E1227 05:45:42.925043 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:43.425023298 +0000 UTC m=+66.185092613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.029490 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s"
Dec 27 05:45:43 crc kubenswrapper[4760]: E1227 05:45:43.029854 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:43.529842719 +0000 UTC m=+66.289912034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.058319 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr"]
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.104918 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f"]
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.111630 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zl9wx"]
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.120802 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wsxtc"]
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.130239 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.130276 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l26l5"]
Dec 27 05:45:43 crc kubenswrapper[4760]: E1227 05:45:43.130625 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:43.630611954 +0000 UTC m=+66.390681269 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:43 crc kubenswrapper[4760]: W1227 05:45:43.170985 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e844584_fa56_4ff9_b454_bcb89ae547db.slice/crio-1df06eae71051b0e9eaac2f8cd26d95e0d6ab99348b8a8caa3b032e278182d71 WatchSource:0}: Error finding container 1df06eae71051b0e9eaac2f8cd26d95e0d6ab99348b8a8caa3b032e278182d71: Status 404 returned error can't find the container with id 1df06eae71051b0e9eaac2f8cd26d95e0d6ab99348b8a8caa3b032e278182d71
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.188150 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6cgcs" event={"ID":"411e4a40-9030-4d01-aeb4-c5dd6d25b9b2","Type":"ContainerStarted","Data":"5ff6a5dbb8ea6adca1d9e02bb714a85d86d0075c7806809e4a7ec85958a2d49e"}
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.189705 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-phfqm" event={"ID":"e75b3601-c0ac-4de5-9847-52a8c087a6f9","Type":"ContainerStarted","Data":"12b24883a5001afd85e6a13f7cae8ea0d68da81d052d745b3b8c43705d7ad140"}
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.191548 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf" event={"ID":"a00e2aaf-ac7c-4672-93d5-fc662e271b41","Type":"ContainerStarted","Data":"080f9fa5170267a259c6c32d9c67fd7070fb28380b057f668ce884c3cc26889e"}
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.193330 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" event={"ID":"32612889-890b-4efb-a777-8ad13a778841","Type":"ContainerStarted","Data":"adef38db587bde735147e9408e047065faa6a4f5e2d69297435720424322f6af"}
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.196243 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cdjdr" event={"ID":"d0478c05-e6ed-4970-ad35-5ef3904f94c9","Type":"ContainerStarted","Data":"c53d5f798ae0d70d6c41cc4cbfc8d710b872c16eb23ca6ce901585b39f5c0f8e"}
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.197450 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8hds" event={"ID":"e90733f1-2aa6-4487-9212-1f21cc77bea4","Type":"ContainerStarted","Data":"1f8e9580e9648fcb38cbe395e22e3757b311d7f58d0b14adb2acd6370fba5d20"}
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.201758 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj" event={"ID":"4a72f86c-c203-482f-b0e9-beb1f3f77fa0","Type":"ContainerStarted","Data":"cf1f87840645feec63a2dabecad44abbd112798b7ea24934af4f8962c94ca816"}
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.204384 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kkb4n" event={"ID":"3cacaf44-3b98-45b6-9a51-e41e33c4679d","Type":"ContainerStarted","Data":"dfe0d1bd574ca65fd6e9d18eee5ebd254fe779d8622fa3fc72f4613b36c4520a"}
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.205323 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z" event={"ID":"67b40739-4e24-41b5-9d6a-7ab19939c81c","Type":"ContainerStarted","Data":"76840ec4115d92a3e58d45542aa7e2d2b2b35a5c551c89edacaab52cf11340e9"}
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.207471 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz" event={"ID":"33f25b04-37cd-4724-9db9-b1816dfb71bc","Type":"ContainerStarted","Data":"a00573256401dae34743e6f9e0386666054a88ece06d394e5c99bd9ad26a5575"}
Dec 27 05:45:43 crc kubenswrapper[4760]: W1227 05:45:43.208806 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf3096b8_b961_454b_9647_ac2b9d3868ca.slice/crio-feb46d9e532becf9e3f5f0a1c010d3830c006f1f7a3cee0d04c60b14c052658e WatchSource:0}: Error finding container feb46d9e532becf9e3f5f0a1c010d3830c006f1f7a3cee0d04c60b14c052658e: Status 404 returned error can't find the container with id feb46d9e532becf9e3f5f0a1c010d3830c006f1f7a3cee0d04c60b14c052658e
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.208973 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" event={"ID":"62b0425e-20fc-41c8-99c0-cdccdc48d766","Type":"ContainerStarted","Data":"29e28891056005a86eb00fe1028c6d88df6301647e2cd2c9a29961d3fc722a06"}
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.209821 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" event={"ID":"385068c5-bdbb-41fe-b1bc-1597b2a461ea","Type":"ContainerStarted","Data":"6ac4ab40d7ce02fe7b9000750242ddf8c6cedfee0011f49b3405a2657cb0c33e"}
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.211261 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" event={"ID":"9a5980c3-ee00-41b1-9707-f349149a53c4","Type":"ContainerStarted","Data":"080de989d6d000c5c8eaf510c063f6bb5e63ffc69a1d41a5167ef4c49c36ce5b"}
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.212318 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w927h" event={"ID":"2f024673-515b-451c-b19c-f542b4cebba9","Type":"ContainerStarted","Data":"2d5ba775ade53d208f8e421bf245f5f5cda1691f5e52be1cc59381fde4b0f89c"}
Dec 27 05:45:43 crc kubenswrapper[4760]: W1227 05:45:43.224834 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d80f15d_ce8a_4c94_834d_9ec4829bc34f.slice/crio-970d76d4b3790d8e30cacd0cbfc8c408f8b74d877f4f24ffd47e459f1e7f7c63 WatchSource:0}: Error finding container 970d76d4b3790d8e30cacd0cbfc8c408f8b74d877f4f24ffd47e459f1e7f7c63: Status 404 returned error can't find the container with id 970d76d4b3790d8e30cacd0cbfc8c408f8b74d877f4f24ffd47e459f1e7f7c63
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.233782 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s"
Dec 27 05:45:43 crc kubenswrapper[4760]: E1227 05:45:43.234058 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:43.734046452 +0000 UTC m=+66.494115767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.238975 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd"]
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.335531 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:43 crc kubenswrapper[4760]: E1227 05:45:43.335702 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:43.835669967 +0000 UTC m=+66.595739282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.335796 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s"
Dec 27 05:45:43 crc kubenswrapper[4760]: E1227 05:45:43.336427 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:43.836416365 +0000 UTC m=+66.596485730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.436789 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:43 crc kubenswrapper[4760]: E1227 05:45:43.437181 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:43.937167159 +0000 UTC m=+66.697236474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.536688 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6"]
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.538839 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s"
Dec 27 05:45:43 crc kubenswrapper[4760]: E1227 05:45:43.539226 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:44.039207965 +0000 UTC m=+66.799277280 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.542546 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zfw4n"]
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.548149 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tqbx6"]
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.552858 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd"]
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.554842 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92"]
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.559399 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-p9vf5"]
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.640437 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:43 crc kubenswrapper[4760]: E1227 05:45:43.640624 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:44.140600374 +0000 UTC m=+66.900669689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 27 05:45:43 crc kubenswrapper[4760]: I1227 05:45:43.640787 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s"
Dec 27 05:45:43 crc kubenswrapper[4760]: E1227 05:45:43.641194 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:44.141173978 +0000 UTC m=+66.901243333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
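The block above is one full pass of the kubelet's volume reconciler, and the same shape recurs roughly every 100 ms: MountVolume starts for the incoming image-registry-697d97f7c8-dzs8s pod and UnmountVolume for the departed pod 8f668bae-612b-4b75-9490-919e737c6a3b, both fail because no CSI client can be built for kubevirt.io.hostpath-provisioner (the driver has not yet registered with this kubelet), and nestedpendingoperations parks each operation for the 500 ms durationBeforeRetry shown. A minimal Go sketch of that retry shape, with invented names and timings rather than kubelet source:

```go
// Illustrative model of the loop in the log: operations keep failing until
// the driver name appears in the registry, with a fixed 500ms backoff.
package main

import (
	"fmt"
	"sync"
	"time"
)

// driverRegistry stands in for the kubelet's list of registered CSI plugins.
type driverRegistry struct {
	mu      sync.Mutex
	drivers map[string]bool
}

func (r *driverRegistry) lookup(name string) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if !r.drivers[name] {
		// Same failure mode as newCsiDriverClient / TearDownAt in the log.
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return nil
}

func (r *driverRegistry) register(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = true
}

func main() {
	const driver = "kubevirt.io.hostpath-provisioner"
	reg := &driverRegistry{drivers: map[string]bool{}}

	// The driver only becomes available once its plugin pod is up, which is
	// what the csi-hostpathplugin-l26l5 events in this journal correspond to.
	go func() {
		time.Sleep(1200 * time.Millisecond) // invented delay
		reg.register(driver)
	}()

	const backoff = 500 * time.Millisecond // the log's durationBeforeRetry
	for attempt := 1; ; attempt++ {
		if err := reg.lookup(driver); err != nil {
			fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, backoff)
			time.Sleep(backoff)
			continue
		}
		fmt.Printf("attempt %d: driver registered, mount/unmount can proceed\n", attempt)
		return
	}
}
```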
Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.223793 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" event={"ID":"cf3096b8-b961-454b-9647-ac2b9d3868ca","Type":"ContainerStarted","Data":"feb46d9e532becf9e3f5f0a1c010d3830c006f1f7a3cee0d04c60b14c052658e"}
Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.225732 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f" event={"ID":"5d80f15d-ce8a-4c94-834d-9ec4829bc34f","Type":"ContainerStarted","Data":"970d76d4b3790d8e30cacd0cbfc8c408f8b74d877f4f24ffd47e459f1e7f7c63"}
Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.227802 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l26l5" event={"ID":"7f4d886c-9f72-4358-8ddb-f820f7181639","Type":"ContainerStarted","Data":"cce9fb68d58ae1bc6b2001a057fc5c4587c6d7d32a9a4f8f2c67285694f2747e"}
Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.229134 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr" event={"ID":"9e844584-fa56-4ff9-b454-bcb89ae547db","Type":"ContainerStarted","Data":"1df06eae71051b0e9eaac2f8cd26d95e0d6ab99348b8a8caa3b032e278182d71"}
Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.230389 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" event={"ID":"3c14515f-ee0e-4560-bed2-7ef5160b61ec","Type":"ContainerStarted","Data":"a0ea7b703b631f121bd8c7d1f0ee9339895906a53e0e8ac1cdfab64222191981"}
Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.250617 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 27 05:45:44 crc kubenswrapper[4760]: E1227 05:45:44.250824 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:44.750763337 +0000 UTC m=+67.510832692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
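Note the ContainerStarted event for hostpath-provisioner/csi-hostpathplugin-l26l5 at 05:45:44.227802: the plugin pod that provides kubevirt.io.hostpath-provisioner is only now coming up, so the failures above are expected to clear once its registrar announces the driver to the kubelet. One way to confirm from outside the node is to list CSINode objects, which mirror the per-node registered-driver list these lookups are failing against; a client-go sketch that assumes only a reachable kubeconfig (this is not tooling from the log):

```go
// List which CSI drivers each node's kubelet reports as registered.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at the cluster; bail out otherwise.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// CSINode carries the drivers actually registered with each node's kubelet.
	nodes, err := cs.StorageV1().CSINodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		if len(n.Spec.Drivers) == 0 {
			fmt.Printf("node %s: no CSI drivers registered yet\n", n.Name)
			continue
		}
		for _, d := range n.Spec.Drivers {
			fmt.Printf("node %s: driver %s registered\n", n.Name, d.Name)
		}
	}
}
```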
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.251210 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:44 crc kubenswrapper[4760]: E1227 05:45:44.251764 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:44.751747941 +0000 UTC m=+67.511817296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.351889 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:44 crc kubenswrapper[4760]: E1227 05:45:44.352076 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:44.852052786 +0000 UTC m=+67.612122101 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.352229 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:44 crc kubenswrapper[4760]: E1227 05:45:44.352640 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:44.85259859 +0000 UTC m=+67.612668005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.452927 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:44 crc kubenswrapper[4760]: E1227 05:45:44.453084 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:44.953056999 +0000 UTC m=+67.713126354 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.453256 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:44 crc kubenswrapper[4760]: E1227 05:45:44.453616 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:44.953601263 +0000 UTC m=+67.713670618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.554566 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:44 crc kubenswrapper[4760]: E1227 05:45:44.554869 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:45.054832621 +0000 UTC m=+67.814901956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.656242 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:44 crc kubenswrapper[4760]: E1227 05:45:44.656621 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:45.156601893 +0000 UTC m=+67.916671208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.757947 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:44 crc kubenswrapper[4760]: E1227 05:45:44.758204 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:45.258161319 +0000 UTC m=+68.018230664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.758427 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:44 crc kubenswrapper[4760]: E1227 05:45:44.758873 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:45.258848006 +0000 UTC m=+68.018917361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.860129 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:44 crc kubenswrapper[4760]: E1227 05:45:44.860276 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:45.360247388 +0000 UTC m=+68.120316713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.860371 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:44 crc kubenswrapper[4760]: E1227 05:45:44.860695 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:45.360683679 +0000 UTC m=+68.120753094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.961430 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:44 crc kubenswrapper[4760]: E1227 05:45:44.961643 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:45.461609049 +0000 UTC m=+68.221678394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:44 crc kubenswrapper[4760]: I1227 05:45:44.961981 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:44 crc kubenswrapper[4760]: E1227 05:45:44.962469 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:45.462449119 +0000 UTC m=+68.222518464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:45 crc kubenswrapper[4760]: I1227 05:45:45.063260 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:45 crc kubenswrapper[4760]: E1227 05:45:45.063544 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:45.563509844 +0000 UTC m=+68.323579189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:45 crc kubenswrapper[4760]: I1227 05:45:45.063603 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:45 crc kubenswrapper[4760]: E1227 05:45:45.064166 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:45.5641411 +0000 UTC m=+68.324210445 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:45 crc kubenswrapper[4760]: I1227 05:45:45.165433 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:45 crc kubenswrapper[4760]: E1227 05:45:45.165781 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:45.665743757 +0000 UTC m=+68.425813102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:45 crc kubenswrapper[4760]: I1227 05:45:45.166067 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:45 crc kubenswrapper[4760]: E1227 05:45:45.166496 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:45.666473935 +0000 UTC m=+68.426543280 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:45 crc kubenswrapper[4760]: I1227 05:45:45.267833 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:45 crc kubenswrapper[4760]: E1227 05:45:45.268169 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:45.768131864 +0000 UTC m=+68.528201209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:45 crc kubenswrapper[4760]: I1227 05:45:45.268264 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:45 crc kubenswrapper[4760]: E1227 05:45:45.268729 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:45.768714778 +0000 UTC m=+68.528784123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:45 crc kubenswrapper[4760]: I1227 05:45:45.369902 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:45 crc kubenswrapper[4760]: E1227 05:45:45.370230 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:45.870193932 +0000 UTC m=+68.630263287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:45 crc kubenswrapper[4760]: I1227 05:45:45.472256 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:45 crc kubenswrapper[4760]: E1227 05:45:45.472834 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:45.972810175 +0000 UTC m=+68.732879530 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:45 crc kubenswrapper[4760]: I1227 05:45:45.574031 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:45 crc kubenswrapper[4760]: E1227 05:45:45.574299 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:46.074266198 +0000 UTC m=+68.834335553 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[The same UnmountVolume.TearDown / MountVolume.MountDevice failure pair repeats, identical except for timestamps, roughly every 100 ms with a 500 ms retry backoff from 05:45:45.574 through 05:45:46.394.]
Dec 27 05:45:46 crc kubenswrapper[4760]: I1227 05:45:46.494888 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:46 crc kubenswrapper[4760]: E1227 05:45:46.495473 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:46.995403158 +0000 UTC m=+69.755472503 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:46 crc kubenswrapper[4760]: I1227 05:45:46.495643 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:46 crc kubenswrapper[4760]: E1227 05:45:46.496191 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:46.996165707 +0000 UTC m=+69.756235052 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:46 crc kubenswrapper[4760]: I1227 05:45:46.558864 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725"] Dec 27 05:45:46 crc kubenswrapper[4760]: I1227 05:45:46.571897 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz"] Dec 27 05:45:46 crc kubenswrapper[4760]: I1227 05:45:46.572273 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x4v5w"] Dec 27 05:45:46 crc kubenswrapper[4760]: I1227 05:45:46.574251 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2x9wk"] Dec 27 05:45:46 crc kubenswrapper[4760]: I1227 05:45:46.577851 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb"] Dec 27 05:45:46 crc kubenswrapper[4760]: I1227 05:45:46.577893 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pdd96"] Dec 27 05:45:46 crc kubenswrapper[4760]: I1227 05:45:46.581131 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ffbsv"] Dec 27 05:45:46 crc kubenswrapper[4760]: I1227 05:45:46.596658 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:46 crc kubenswrapper[4760]: W1227 05:45:46.610312 4760 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1521745e_1f44_4a25_8eff_05062c2c24ef.slice/crio-82548f31ae104443ed79b14e30ed3aded904d0ef75af71395e516f5a632015a7 WatchSource:0}: Error finding container 82548f31ae104443ed79b14e30ed3aded904d0ef75af71395e516f5a632015a7: Status 404 returned error can't find the container with id 82548f31ae104443ed79b14e30ed3aded904d0ef75af71395e516f5a632015a7 Dec 27 05:45:46 crc kubenswrapper[4760]: E1227 05:45:46.610466 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:47.110438994 +0000 UTC m=+69.870508319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:46 crc kubenswrapper[4760]: W1227 05:45:46.614224 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb38a69b_db55_4829_8258_bf3da32477ac.slice/crio-fcc1f8b235d24f87f5f47316048b53f3be376156a7133d775dfb4571bc8cc33b WatchSource:0}: Error finding container fcc1f8b235d24f87f5f47316048b53f3be376156a7133d775dfb4571bc8cc33b: Status 404 returned error can't find the container with id fcc1f8b235d24f87f5f47316048b53f3be376156a7133d775dfb4571bc8cc33b Dec 27 05:45:46 crc kubenswrapper[4760]: W1227 05:45:46.616986 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod911b180f_4536_4181_956b_abd6e2c8e0d0.slice/crio-25fa5842f7f6d12c998a9ac48bf38440f142d05b0d9c324b6f9e6515f68a1a9b WatchSource:0}: Error finding container 25fa5842f7f6d12c998a9ac48bf38440f142d05b0d9c324b6f9e6515f68a1a9b: Status 404 returned error can't find the container with id 25fa5842f7f6d12c998a9ac48bf38440f142d05b0d9c324b6f9e6515f68a1a9b Dec 27 05:45:46 crc kubenswrapper[4760]: I1227 05:45:46.697837 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:46 crc kubenswrapper[4760]: E1227 05:45:46.698308 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:47.198289355 +0000 UTC m=+69.958358670 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:46 crc kubenswrapper[4760]: I1227 05:45:46.799074 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:46 crc kubenswrapper[4760]: E1227 05:45:46.799221 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:47.299194865 +0000 UTC m=+70.059264190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:46 crc kubenswrapper[4760]: I1227 05:45:46.799344 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:46 crc kubenswrapper[4760]: E1227 05:45:46.799715 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:47.299703807 +0000 UTC m=+70.059773202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:46 crc kubenswrapper[4760]: I1227 05:45:46.860190 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2"] Dec 27 05:45:46 crc kubenswrapper[4760]: I1227 05:45:46.900791 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:46 crc kubenswrapper[4760]: E1227 05:45:46.901140 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:47.401063099 +0000 UTC m=+70.161132454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.002264 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:47 crc kubenswrapper[4760]: E1227 05:45:47.002679 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:47.502659085 +0000 UTC m=+70.262728440 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.103669 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:47 crc kubenswrapper[4760]: E1227 05:45:47.104312 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:47.604287644 +0000 UTC m=+70.364356989 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:47 crc kubenswrapper[4760]: W1227 05:45:47.126545 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bb3c435_9a35_45f9_af35_cb29f2e6ccb1.slice/crio-753ae0632c2782dd6b11ba25cf3cf07d5803afae5bcba20958c0ee3decccc0e4 WatchSource:0}: Error finding container 753ae0632c2782dd6b11ba25cf3cf07d5803afae5bcba20958c0ee3decccc0e4: Status 404 returned error can't find the container with id 753ae0632c2782dd6b11ba25cf3cf07d5803afae5bcba20958c0ee3decccc0e4 Dec 27 05:45:47 crc kubenswrapper[4760]: W1227 05:45:47.131500 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacef0fc7_63a7_46c0_ad2d_2e0f4b6ad8da.slice/crio-ac9b387f84941ff4ccbbdbb95ceb113d7e9b4d25dd36e2395bbae858133d584e WatchSource:0}: Error finding container ac9b387f84941ff4ccbbdbb95ceb113d7e9b4d25dd36e2395bbae858133d584e: Status 404 returned error can't find the container with id ac9b387f84941ff4ccbbdbb95ceb113d7e9b4d25dd36e2395bbae858133d584e Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.148189 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cmmr9"] Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.169078 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5p2sr"] Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.182362 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz"] Dec 27 05:45:47 crc kubenswrapper[4760]: W1227 05:45:47.187361 4760 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17a2cff3_78a9_45b6_a044_81e7cc36ca0e.slice/crio-a838351cc5cba3e893b04e3a2f8e8fbed08419ee179cf55e06af30163f3f1c78 WatchSource:0}: Error finding container a838351cc5cba3e893b04e3a2f8e8fbed08419ee179cf55e06af30163f3f1c78: Status 404 returned error can't find the container with id a838351cc5cba3e893b04e3a2f8e8fbed08419ee179cf55e06af30163f3f1c78 Dec 27 05:45:47 crc kubenswrapper[4760]: W1227 05:45:47.192603 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c470872_3eb8_4afc_a357_b225ba9b6c94.slice/crio-ed7e1fc95c21c983e36c60e866abaeb19e0f9cae2be89be7f00ae9843cd0f7db WatchSource:0}: Error finding container ed7e1fc95c21c983e36c60e866abaeb19e0f9cae2be89be7f00ae9843cd0f7db: Status 404 returned error can't find the container with id ed7e1fc95c21c983e36c60e866abaeb19e0f9cae2be89be7f00ae9843cd0f7db Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.193673 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-g5866"] Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.206238 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:47 crc kubenswrapper[4760]: E1227 05:45:47.206706 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:47.70669514 +0000 UTC m=+70.466764455 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:47 crc kubenswrapper[4760]: W1227 05:45:47.216602 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85882885_093a_4277_80b0_c8db3141030f.slice/crio-68c0559156a9761e357a046ad6d76387b592ef2f516171bfd6119c9fe858d267 WatchSource:0}: Error finding container 68c0559156a9761e357a046ad6d76387b592ef2f516171bfd6119c9fe858d267: Status 404 returned error can't find the container with id 68c0559156a9761e357a046ad6d76387b592ef2f516171bfd6119c9fe858d267 Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.249134 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" event={"ID":"5493a447-e04c-4f7b-b8ed-6816543ee631","Type":"ContainerStarted","Data":"ca0f93c5f026360bc6ceee36f35e390cc8312db3b8d35907835d0c49af6fd3c5"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.255629 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" event={"ID":"1521745e-1f44-4a25-8eff-05062c2c24ef","Type":"ContainerStarted","Data":"82548f31ae104443ed79b14e30ed3aded904d0ef75af71395e516f5a632015a7"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.258372 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" event={"ID":"85882885-093a-4277-80b0-c8db3141030f","Type":"ContainerStarted","Data":"68c0559156a9761e357a046ad6d76387b592ef2f516171bfd6119c9fe858d267"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.273370 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" event={"ID":"364f3367-d5ab-4345-9c3f-bb7529d76c6f","Type":"ContainerStarted","Data":"d93fb6152a699887851d88d95cde1a641c339a437e4c615cd6d2eb024740f565"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.275335 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd" event={"ID":"7b7765ee-7703-4b50-a91b-94f8cadaf3e3","Type":"ContainerStarted","Data":"54207413f809b7db796fe77ac58c76b50517a87eb1f63dc8f924e7840fc83624"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.276771 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd" event={"ID":"8b1e43e2-44f4-4c7f-acea-660e06d1ef90","Type":"ContainerStarted","Data":"d05a67d4928cc49fc1ea12c2ffe01ac68a4fc7afaf175b00aad2960a0803e42a"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.277720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mvf5k" event={"ID":"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d","Type":"ContainerStarted","Data":"fd9412adaaaef8074440f434fc30561ad4c37d36eda9feb8b689bc9c401125e6"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.282274 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz" event={"ID":"8c470872-3eb8-4afc-a357-b225ba9b6c94","Type":"ContainerStarted","Data":"ed7e1fc95c21c983e36c60e866abaeb19e0f9cae2be89be7f00ae9843cd0f7db"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.283238 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" event={"ID":"e0c1456f-b18f-4c71-a1f8-319ec8b012a1","Type":"ContainerStarted","Data":"ec667264aad70862b0a789c531bcfdec75cdc6cb6e545d11e5870a4865537522"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.284932 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92" event={"ID":"bc145080-11cc-455d-b8d4-7baab6859228","Type":"ContainerStarted","Data":"0657f4f8693b69096d7058de7bdd739185e2442178da8fd5499c24a0c82d7801"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.286451 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2x9wk" event={"ID":"fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1","Type":"ContainerStarted","Data":"c5ed281b4bd28b76deb747621091a4c7e5988648d206ca7488c0e668a4ff3160"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.289341 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tqbx6" event={"ID":"911b180f-4536-4181-956b-abd6e2c8e0d0","Type":"ContainerStarted","Data":"25fa5842f7f6d12c998a9ac48bf38440f142d05b0d9c324b6f9e6515f68a1a9b"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.291912 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cmmr9" event={"ID":"3dfa4237-e979-4215-9f2c-20aa6303cae7","Type":"ContainerStarted","Data":"6c636926c24158d747f9df0547019b5f12ef61ec7a98d711c7c411a726b49c11"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.292945 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" event={"ID":"64a3b986-ea3d-4cbb-83ba-44971b220664","Type":"ContainerStarted","Data":"81412764411edf48089f3e023f4ee987666f26ce6246bd9ab06833549be73732"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.293907 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" event={"ID":"17a2cff3-78a9-45b6-a044-81e7cc36ca0e","Type":"ContainerStarted","Data":"a838351cc5cba3e893b04e3a2f8e8fbed08419ee179cf55e06af30163f3f1c78"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.296001 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zfw4n" event={"ID":"db38a69b-db55-4829-8258-bf3da32477ac","Type":"ContainerStarted","Data":"fcc1f8b235d24f87f5f47316048b53f3be376156a7133d775dfb4571bc8cc33b"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.297571 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz" event={"ID":"acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da","Type":"ContainerStarted","Data":"ac9b387f84941ff4ccbbdbb95ceb113d7e9b4d25dd36e2395bbae858133d584e"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.298774 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p9vf5" 
event={"ID":"55aab342-3de1-46cf-8d85-d71345fd1538","Type":"ContainerStarted","Data":"b7d85e9154849a797e812579aec064b84d7d6518a34d5025946a3a37cbe2f7c1"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.300217 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" event={"ID":"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1","Type":"ContainerStarted","Data":"753ae0632c2782dd6b11ba25cf3cf07d5803afae5bcba20958c0ee3decccc0e4"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.301337 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" event={"ID":"933d294b-c115-4bd3-ade2-1ae37665ae1b","Type":"ContainerStarted","Data":"82d199572b4f89cd8b3d61f4c8ea3e1de7904d0318ded3373a86d2b45a60ac8a"} Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.306893 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:47 crc kubenswrapper[4760]: E1227 05:45:47.309734 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:47.809707123 +0000 UTC m=+70.569776438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.310778 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:47 crc kubenswrapper[4760]: E1227 05:45:47.311368 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:47.811355763 +0000 UTC m=+70.571425078 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[The same UnmountVolume.TearDown / MountVolume.MountDevice retry pair repeats, identical except for timestamps, from 05:45:47.411 through 05:45:47.924 with the usual 500 ms backoff.]
Dec 27 05:45:47 crc kubenswrapper[4760]: I1227 05:45:47.924617 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:47 crc kubenswrapper[4760]: E1227 05:45:47.924906 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:48.424897952 +0000 UTC m=+71.184967267 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:48 crc kubenswrapper[4760]: I1227 05:45:48.025579 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:48 crc kubenswrapper[4760]: E1227 05:45:48.025707 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:48.52568399 +0000 UTC m=+71.285753325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:48 crc kubenswrapper[4760]: I1227 05:45:48.025914 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:48 crc kubenswrapper[4760]: I1227 05:45:48.025981 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs\") pod \"network-metrics-daemon-bxmb9\" (UID: \"d7ad49d7-d1ed-4414-8a10-778d020e1da5\") " pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:48 crc kubenswrapper[4760]: E1227 05:45:48.026208 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:48.526197452 +0000 UTC m=+71.286266777 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:48 crc kubenswrapper[4760]: I1227 05:45:48.032673 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ad49d7-d1ed-4414-8a10-778d020e1da5-metrics-certs\") pod \"network-metrics-daemon-bxmb9\" (UID: \"d7ad49d7-d1ed-4414-8a10-778d020e1da5\") " pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:48 crc kubenswrapper[4760]: I1227 05:45:48.038971 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bxmb9" Dec 27 05:45:48 crc kubenswrapper[4760]: I1227 05:45:48.126977 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:48 crc kubenswrapper[4760]: E1227 05:45:48.127234 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:48.627199136 +0000 UTC m=+71.387268491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:48 crc kubenswrapper[4760]: I1227 05:45:48.228253 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:48 crc kubenswrapper[4760]: E1227 05:45:48.228520 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:48.728509155 +0000 UTC m=+71.488578470 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:48 crc kubenswrapper[4760]: I1227 05:45:48.316043 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-g5866" event={"ID":"0015afce-dba1-4f1d-a3d3-5f9abe477e43","Type":"ContainerStarted","Data":"e78293c5a01de3943d60822f1732f8f76fdf85febbb086b02ca5bf1ffec37820"} Dec 27 05:45:48 crc kubenswrapper[4760]: I1227 05:45:48.332386 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:48 crc kubenswrapper[4760]: E1227 05:45:48.332879 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:48.832861629 +0000 UTC m=+71.592930944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:48 crc kubenswrapper[4760]: I1227 05:45:48.410899 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bxmb9"] Dec 27 05:45:48 crc kubenswrapper[4760]: I1227 05:45:48.434410 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:48 crc kubenswrapper[4760]: E1227 05:45:48.434690 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:48.934679662 +0000 UTC m=+71.694748977 (durationBeforeRetry 500ms). 
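Interleaved with the volume retries, the "SyncLoop (PLEG)" lines show the Pod Lifecycle Event Generator feeding container state changes into kubelet's sync loop: ContainerStarted and ContainerDied events keyed by pod UID and container ID. A minimal model of that event stream (pod UIDs lifted from the log, container IDs shortened, everything else assumed):

```go
// Toy model of the PLEG event stream consumed by kubelet's sync loop.
package main

import "fmt"

type plegEvent struct {
	PodUID string
	Type   string // "ContainerStarted" or "ContainerDied"
	Data   string // container or sandbox ID
}

func main() {
	events := make(chan plegEvent, 2)
	events <- plegEvent{"0015afce-dba1-4f1d-a3d3-5f9abe477e43", "ContainerStarted", "e78293c5a01d"}
	events <- plegEvent{"64a3b986-ea3d-4cbb-83ba-44971b220664", "ContainerDied", "9fbdc292ed57"}
	close(events)
	for ev := range events {
		// kubelet reacts per pod: start the next container, restart, or clean up.
		fmt.Printf("SyncLoop (PLEG): pod=%s type=%s id=%s\n", ev.PodUID, ev.Type, ev.Data)
	}
}
```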
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:48 crc kubenswrapper[4760]: W1227 05:45:48.457131 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7ad49d7_d1ed_4414_8a10_778d020e1da5.slice/crio-5cfe189912456bcf65f4e15536fe55dd706ad2e158d9746a51bc34de4698ad4d WatchSource:0}: Error finding container 5cfe189912456bcf65f4e15536fe55dd706ad2e158d9746a51bc34de4698ad4d: Status 404 returned error can't find the container with id 5cfe189912456bcf65f4e15536fe55dd706ad2e158d9746a51bc34de4698ad4d Dec 27 05:45:48 crc kubenswrapper[4760]: I1227 05:45:48.535200 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:48 crc kubenswrapper[4760]: E1227 05:45:48.535476 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:49.035462129 +0000 UTC m=+71.795531444 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:48 crc kubenswrapper[4760]: I1227 05:45:48.636239 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:48 crc kubenswrapper[4760]: E1227 05:45:48.636637 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:49.136621216 +0000 UTC m=+71.896690531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:48 crc kubenswrapper[4760]: I1227 05:45:48.738507 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:48 crc kubenswrapper[4760]: E1227 05:45:48.738942 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:49.23892641 +0000 UTC m=+71.998995725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:48 crc kubenswrapper[4760]: I1227 05:45:48.839927 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:48 crc kubenswrapper[4760]: E1227 05:45:48.840380 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:49.340364224 +0000 UTC m=+72.100433539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:48 crc kubenswrapper[4760]: E1227 05:45:48.857575 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf3096b8_b961_454b_9647_ac2b9d3868ca.slice/crio-9f57320a6d367e967e9e87e695d03431184c3adc22c5a3ea652d1b2241a63d6b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64a3b986_ea3d_4cbb_83ba_44971b220664.slice/crio-9fbdc292ed57a9a920b0621f0a5ce96107fd81990da76e6b7967438c2ef0ca34.scope\": RecentStats: unable to find data in memory cache]" Dec 27 05:45:48 crc kubenswrapper[4760]: I1227 05:45:48.942137 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:48 crc kubenswrapper[4760]: E1227 05:45:48.942590 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:49.442575386 +0000 UTC m=+72.202644701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.043117 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:49 crc kubenswrapper[4760]: E1227 05:45:49.043414 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:49.543403614 +0000 UTC m=+72.303472929 (durationBeforeRetry 500ms). 
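The cadvisor "Partial failure issuing cadvisor.ContainerInfoV2" warning above is a startup symptom rather than a second bug: stats are answered from an in-memory cache, and cgroups created moments earlier have no samples in it yet, so a bulk query returns data for some containers and "RecentStats: unable to find data in memory cache" for the rest. A toy illustration of that partial-result shape (assumed, not cadvisor code; scope names shortened):

```go
// Toy illustration of partial stats results from an in-memory cache.
package main

import "fmt"

func main() {
	cache := map[string][]int{ // cgroup scope -> recent samples
		"crio-de1fc27d.scope": {1, 2, 3}, // running long enough to have samples
		"crio-9fbdc292.scope": nil,       // just created: nothing cached yet
	}
	for scope, samples := range cache {
		if len(samples) == 0 {
			fmt.Printf("%s: RecentStats: unable to find data in memory cache\n", scope)
			continue
		}
		fmt.Printf("%s: %d samples\n", scope, len(samples))
	}
}
```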
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.148363 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:49 crc kubenswrapper[4760]: E1227 05:45:49.148893 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:49.648879356 +0000 UTC m=+72.408948671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.250896 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:49 crc kubenswrapper[4760]: E1227 05:45:49.251615 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:49.751598681 +0000 UTC m=+72.511667996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.337996 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l26l5" event={"ID":"7f4d886c-9f72-4358-8ddb-f820f7181639","Type":"ContainerStarted","Data":"b9e0cecc445d65b0cc572eb8bf729f94bc74496273cfe7b86f9545d6b7fdaf66"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.345522 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" event={"ID":"85882885-093a-4277-80b0-c8db3141030f","Type":"ContainerStarted","Data":"090cc9317bd246cd8db794df4697249821b99bd7454a4227897dd85ad52efcad"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.351787 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:49 crc kubenswrapper[4760]: E1227 05:45:49.352547 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:49.852532682 +0000 UTC m=+72.612601997 (durationBeforeRetry 500ms). 
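The csi-hostpathplugin-l26l5 ContainerStarted event above is the other half of the story: the plugin whose registration the mounts are waiting for is only now coming up. Node plugins announce themselves by placing a registration socket under kubelet's plugins_registry directory; once a kubevirt.io.hostpath-provisioner socket appears there, the retry loop starts succeeding. A node-local sketch (the directory is the conventional kubelet default, assumed for this host; run on the node itself):

```go
// List kubelet plugin-registration sockets on the node.
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// Conventional kubelet plugin watcher directory (assumption for this host).
	entries, err := os.ReadDir("/var/lib/kubelet/plugins_registry")
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		fmt.Println(e.Name()) // e.g. a kubevirt.io.hostpath-provisioner registration socket
	}
}
```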
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.362226 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" event={"ID":"17a2cff3-78a9-45b6-a044-81e7cc36ca0e","Type":"ContainerStarted","Data":"be493e19527c95b7359f625c036ef54c1de9ecc958c1ebba3a08afde754032e4"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.366807 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" event={"ID":"364f3367-d5ab-4345-9c3f-bb7529d76c6f","Type":"ContainerStarted","Data":"011204817bbfd4e6a99abf0ab04161a2599e7e1b3494b68afab8494eccb45bc5"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.375826 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tqbx6" event={"ID":"911b180f-4536-4181-956b-abd6e2c8e0d0","Type":"ContainerStarted","Data":"1ebe601bbfcb11f818f4e66cdd8e3a31899e6c6d8d1a40214bcc0da10aa9b1ab"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.386948 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd" event={"ID":"8b1e43e2-44f4-4c7f-acea-660e06d1ef90","Type":"ContainerStarted","Data":"bb942ebe7161017b6901d1fb91bb0ca3e48bb0a806d8acea98abb0e81eddb556"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.405685 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffbsv" podStartSLOduration=48.405661483 podStartE2EDuration="48.405661483s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:49.385850318 +0000 UTC m=+72.145919633" watchObservedRunningTime="2025-12-27 05:45:49.405661483 +0000 UTC m=+72.165730788" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.419470 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bxmb9" event={"ID":"d7ad49d7-d1ed-4414-8a10-778d020e1da5","Type":"ContainerStarted","Data":"9b59c5b4e04b888c22f34b546fe678db5e5f6617188bec29db7706975bb7c277"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.419531 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bxmb9" event={"ID":"d7ad49d7-d1ed-4414-8a10-778d020e1da5","Type":"ContainerStarted","Data":"5cfe189912456bcf65f4e15536fe55dd706ad2e158d9746a51bc34de4698ad4d"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.421981 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-5p2sr" podStartSLOduration=47.421962691 podStartE2EDuration="47.421962691s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:49.419716567 +0000 UTC 
m=+72.179785902" watchObservedRunningTime="2025-12-27 05:45:49.421962691 +0000 UTC m=+72.182031996" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.448287 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mvf5k" event={"ID":"6f6b13a4-0ce7-4190-a0a3-2741bd546a1d","Type":"ContainerStarted","Data":"bdf3eb96ad97a011539819540c5f9ce00c2c18ab05683a49a16d8306df0cd8d2"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.453792 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:49 crc kubenswrapper[4760]: E1227 05:45:49.455028 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:49.955016311 +0000 UTC m=+72.715085626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.463361 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz" event={"ID":"8c470872-3eb8-4afc-a357-b225ba9b6c94","Type":"ContainerStarted","Data":"dd49b2a1c6226edfb7dd5f44a159e7e60c2ee122fb1b187c463d5a4f9ed066d1"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.466547 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cdjdr" event={"ID":"d0478c05-e6ed-4970-ad35-5ef3904f94c9","Type":"ContainerStarted","Data":"c5184ad7176cb6ff369bcbc44d140b5ced81a71996461d40517da56d308adbbb"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.492891 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dv2bd" podStartSLOduration=47.492865617 podStartE2EDuration="47.492865617s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:49.474493448 +0000 UTC m=+72.234562763" watchObservedRunningTime="2025-12-27 05:45:49.492865617 +0000 UTC m=+72.252934952" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.501034 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.519343 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:45:49 crc kubenswrapper[4760]: 
[-]has-synced failed: reason withheld Dec 27 05:45:49 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:45:49 crc kubenswrapper[4760]: healthz check failed Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.519388 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.557045 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:49 crc kubenswrapper[4760]: E1227 05:45:49.558177 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:50.058162566 +0000 UTC m=+72.818231881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.566534 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr" event={"ID":"9e844584-fa56-4ff9-b454-bcb89ae547db","Type":"ContainerStarted","Data":"08e8cd1d85de2a0dffca17d2c29b5e3ce12d2e2c1d261936a43f47b8f9f88dd3"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.611369 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z" event={"ID":"67b40739-4e24-41b5-9d6a-7ab19939c81c","Type":"ContainerStarted","Data":"a1bd5a0252bc50abcde7c4074e6c2df000935b0ff35d6a193bc7b459c883dfb2"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.612119 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.660286 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:49 crc kubenswrapper[4760]: E1227 05:45:49.660607 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:50.160592574 +0000 UTC m=+72.920661889 (durationBeforeRetry 500ms). 
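The router's startup probe output above follows the standard aggregated healthz format: one "[+]name ok" or "[-]name failed: reason withheld" line per registered check, with a trailing "healthz check failed" and an HTTP 500 status when any check fails, which is exactly the statuscode the prober reports. A minimal imitation of that contract (not the router's actual handler):

```go
// Minimal healthz-style aggregator: named checks, [+]/[-] lines, 500 on failure.
package main

import (
	"fmt"
	"log"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
				failed = true
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // the 500 the probe sees
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("backends not ready") }},
		{"has-synced", func() error { return fmt.Errorf("initial sync pending") }},
		{"process-running", func() error { return nil }},
	}
	http.Handle("/healthz", healthz(checks))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```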
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.676523 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb" event={"ID":"fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2","Type":"ContainerStarted","Data":"ef1770b5a3ef62c780f58d59a169deb7787205b4113f19d843c5b79ade1f647f"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.679130 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cmmr9" event={"ID":"3dfa4237-e979-4215-9f2c-20aa6303cae7","Type":"ContainerStarted","Data":"bf3ae9a2e49c7495b74bcddb8a514a18b07674abab54acbf538bdff1fcc49dc4"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.702740 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" event={"ID":"1521745e-1f44-4a25-8eff-05062c2c24ef","Type":"ContainerStarted","Data":"dcf903b3a4f98e3dddf62040b767b953095a04ddf60b6124d1fcbe854b938beb"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.706779 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-cdjdr" podStartSLOduration=13.706764604 podStartE2EDuration="13.706764604s" podCreationTimestamp="2025-12-27 05:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:49.556516995 +0000 UTC m=+72.316586310" watchObservedRunningTime="2025-12-27 05:45:49.706764604 +0000 UTC m=+72.466833919" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.730335 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" event={"ID":"9a5980c3-ee00-41b1-9707-f349149a53c4","Type":"ContainerStarted","Data":"83733298208c5506026e012de2d634dddedad8700cf28d742581f35378091c95"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.732033 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" event={"ID":"385068c5-bdbb-41fe-b1bc-1597b2a461ea","Type":"ContainerStarted","Data":"ccf0ac546888074a02a90bb8f894b145cbef43c9f45b57b540c2ef16566d86a9"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.733399 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz" event={"ID":"acef0fc7-63a7-46c0-ad2d-2e0f4b6ad8da","Type":"ContainerStarted","Data":"2caa76322a847a05978452a51eb7f1bd49c2f870e854ea3141f5106cd0f20a98"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.745220 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zfw4n" event={"ID":"db38a69b-db55-4829-8258-bf3da32477ac","Type":"ContainerStarted","Data":"24ebf5d108b276e0de70e210a50e9fdf8b1bf857df276e3d3ff94e2bfe7b5160"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.746831 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" event={"ID":"3c14515f-ee0e-4560-bed2-7ef5160b61ec","Type":"ContainerStarted","Data":"34715e22af8532949887ebed877337c18deb39cdb9fcd87307a27aca6db733c6"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.747413 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.749963 4760 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-zl9wx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.749999 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" podUID="3c14515f-ee0e-4560-bed2-7ef5160b61ec" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.764363 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd" event={"ID":"7b7765ee-7703-4b50-a91b-94f8cadaf3e3","Type":"ContainerStarted","Data":"08b05edc54307f753919824e59ea1dce9775ae8069e5c6cd2824f5475ea97fa9"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.769074 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:49 crc kubenswrapper[4760]: E1227 05:45:49.770077 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:50.270062933 +0000 UTC m=+73.030132248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.797074 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrhvz" podStartSLOduration=48.797058894 podStartE2EDuration="48.797058894s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:49.778802387 +0000 UTC m=+72.538871702" watchObservedRunningTime="2025-12-27 05:45:49.797058894 +0000 UTC m=+72.557128199" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.797616 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-mvf5k" podStartSLOduration=47.797612128 podStartE2EDuration="47.797612128s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:49.720627683 +0000 UTC m=+72.480696998" watchObservedRunningTime="2025-12-27 05:45:49.797612128 +0000 UTC m=+72.557681443" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.833765 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf" event={"ID":"a00e2aaf-ac7c-4672-93d5-fc662e271b41","Type":"ContainerStarted","Data":"152ea1fda70f0bbd26e0172ca9c59ef5a5fc2e60b5396196ac07d6bc1e4573dc"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.834543 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.843464 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-g5866" event={"ID":"0015afce-dba1-4f1d-a3d3-5f9abe477e43","Type":"ContainerStarted","Data":"a0d8046c793114eac68fd9d071c40438bc12329d041e9cc042dc926e32ed76c2"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.844291 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-g5866" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.855231 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-g5866 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.855273 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-g5866" podUID="0015afce-dba1-4f1d-a3d3-5f9abe477e43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.856517 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="64a3b986-ea3d-4cbb-83ba-44971b220664" containerID="9fbdc292ed57a9a920b0621f0a5ce96107fd81990da76e6b7967438c2ef0ca34" exitCode=0 Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.856578 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" event={"ID":"64a3b986-ea3d-4cbb-83ba-44971b220664","Type":"ContainerDied","Data":"9fbdc292ed57a9a920b0621f0a5ce96107fd81990da76e6b7967438c2ef0ca34"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.863527 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.866290 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-phfqm" event={"ID":"e75b3601-c0ac-4de5-9847-52a8c087a6f9","Type":"ContainerStarted","Data":"0a47cfebf09aebf365ea3add957e4a027b57fad6699e25eb565c91847cbe0682"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.872856 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:49 crc kubenswrapper[4760]: E1227 05:45:49.876338 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:50.376325445 +0000 UTC m=+73.136394760 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.883075 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" event={"ID":"933d294b-c115-4bd3-ade2-1ae37665ae1b","Type":"ContainerStarted","Data":"60144e399dad85c8406261b1cc4ea674d6fb860e4794bb5fd3ebd476e13dea7f"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.886645 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.889673 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rs9rz" podStartSLOduration=48.889660571 podStartE2EDuration="48.889660571s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:49.889304092 +0000 UTC m=+72.649373407" watchObservedRunningTime="2025-12-27 05:45:49.889660571 +0000 UTC m=+72.649729886" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.893304 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kkb4n" event={"ID":"3cacaf44-3b98-45b6-9a51-e41e33c4679d","Type":"ContainerStarted","Data":"a02b1b2301a55f571b70ba9c1a500823b3974f0a2387da2d64ad3f5bf54d93cb"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.926434 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz" event={"ID":"33f25b04-37cd-4724-9db9-b1816dfb71bc","Type":"ContainerStarted","Data":"12f0f599c24ab924d73a71fc90740ffbddf00c0c3aaf52fb6550952cba1a5625"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.948215 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z" podStartSLOduration=47.948193154 podStartE2EDuration="47.948193154s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:49.941632173 +0000 UTC m=+72.701701488" watchObservedRunningTime="2025-12-27 05:45:49.948193154 +0000 UTC m=+72.708262469" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.948748 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" event={"ID":"32612889-890b-4efb-a777-8ad13a778841","Type":"ContainerStarted","Data":"3ceb17c32ec77698632f8aa21eaed83b19b4d71521a87139b0b17ceab395932c"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.973851 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:49 crc kubenswrapper[4760]: E1227 05:45:49.975418 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:50.4753931 +0000 UTC m=+73.235462415 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.977385 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w927h" event={"ID":"2f024673-515b-451c-b19c-f542b4cebba9","Type":"ContainerStarted","Data":"b05c9c6bbaa5fe79c729237a0fdc191f6dadc17d946292a69082a9914698c6ea"} Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.977652 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-w927h" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.988305 4760 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-w927h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.988619 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-w927h" podUID="2f024673-515b-451c-b19c-f542b4cebba9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Dec 27 05:45:49 crc kubenswrapper[4760]: I1227 05:45:49.990550 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgkr" podStartSLOduration=47.990504329 podStartE2EDuration="47.990504329s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:49.988579022 +0000 UTC m=+72.748648337" watchObservedRunningTime="2025-12-27 05:45:49.990504329 +0000 UTC m=+72.750573654" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.003880 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p9vf5" event={"ID":"55aab342-3de1-46cf-8d85-d71345fd1538","Type":"ContainerStarted","Data":"15d5bf51b64aa79772a23db48027cce44bf81825219b88a35337b586c2e0ddc8"} Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.027827 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" event={"ID":"abd7bc00-bf5b-48a1-94fe-82dae0bc732e","Type":"ContainerStarted","Data":"d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668"} Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.029203 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.077069 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:50 crc kubenswrapper[4760]: E1227 05:45:50.077357 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:50.577343845 +0000 UTC m=+73.337413160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.085482 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2x9wk" event={"ID":"fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1","Type":"ContainerStarted","Data":"2d72511022b04a5e34dd9daab8b8a8ad2f2bd48fe427a2d45b299c08cad8ecb0"} Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.102314 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8hds" event={"ID":"e90733f1-2aa6-4487-9212-1f21cc77bea4","Type":"ContainerStarted","Data":"cc305f8e8dc597dd87bd8b37c2216323fe9166c0d4ebf9db2100c10fef76faed"} Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.104787 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" podStartSLOduration=48.104767927 podStartE2EDuration="48.104767927s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.042920993 +0000 UTC m=+72.802990308" watchObservedRunningTime="2025-12-27 05:45:50.104767927 +0000 UTC m=+72.864837242" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.106642 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wzqd4" podStartSLOduration=49.106634812 podStartE2EDuration="49.106634812s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.105209437 +0000 UTC m=+72.865278752" watchObservedRunningTime="2025-12-27 05:45:50.106634812 +0000 UTC m=+72.866704127" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.178861 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:50 crc kubenswrapper[4760]: E1227 05:45:50.180078 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:50.68005885 +0000 UTC m=+73.440128175 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.203169 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xqm68"] Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.203448 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" podStartSLOduration=49.203432072 podStartE2EDuration="49.203432072s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.190958827 +0000 UTC m=+72.951028162" watchObservedRunningTime="2025-12-27 05:45:50.203432072 +0000 UTC m=+72.963501377" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.204533 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.205154 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj" event={"ID":"4a72f86c-c203-482f-b0e9-beb1f3f77fa0","Type":"ContainerStarted","Data":"22a1b37f535e7db156e2d8379d445fc167522d99b0c09cfdb3bf98e2f13d2c30"} Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.211435 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.216724 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" event={"ID":"e0c1456f-b18f-4c71-a1f8-319ec8b012a1","Type":"ContainerStarted","Data":"79b6398247fd45ec61b6fa81116acb22677f2c83a5ad7afc3d82674fddc14356"} Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.218007 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.240840 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zfw4n" podStartSLOduration=48.240763446 podStartE2EDuration="48.240763446s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.239925996 +0000 UTC m=+72.999995321" watchObservedRunningTime="2025-12-27 05:45:50.240763446 +0000 UTC m=+73.000832761" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.248827 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xqm68"] Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.260374 4760 generic.go:334] "Generic (PLEG): container finished" podID="3bb3c435-9a35-45f9-af35-cb29f2e6ccb1" containerID="5a9f7c8c2293a37ed17f52828ff11797957d84457e72fa68535d8705eca3210d" exitCode=0 Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.261138 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" event={"ID":"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1","Type":"ContainerDied","Data":"5a9f7c8c2293a37ed17f52828ff11797957d84457e72fa68535d8705eca3210d"} Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.264420 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.283313 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-cmmr9" podStartSLOduration=49.283300057 podStartE2EDuration="49.283300057s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.282149879 +0000 UTC m=+73.042219194" watchObservedRunningTime="2025-12-27 05:45:50.283300057 +0000 UTC m=+73.043369372" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.288929 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgvfs\" (UniqueName: 
\"kubernetes.io/projected/277187d7-c71a-4583-8d65-2e713e20557d-kube-api-access-hgvfs\") pod \"community-operators-xqm68\" (UID: \"277187d7-c71a-4583-8d65-2e713e20557d\") " pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.289298 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.289362 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/277187d7-c71a-4583-8d65-2e713e20557d-catalog-content\") pod \"community-operators-xqm68\" (UID: \"277187d7-c71a-4583-8d65-2e713e20557d\") " pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.289564 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/277187d7-c71a-4583-8d65-2e713e20557d-utilities\") pod \"community-operators-xqm68\" (UID: \"277187d7-c71a-4583-8d65-2e713e20557d\") " pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:45:50 crc kubenswrapper[4760]: E1227 05:45:50.290442 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:50.790425532 +0000 UTC m=+73.550494927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.320340 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" event={"ID":"62b0425e-20fc-41c8-99c0-cdccdc48d766","Type":"ContainerStarted","Data":"de1fc27da355e8a8dcdf4c0c3bb143a78c3e2c22e482e8bb3f5a2eb759ce1fa8"} Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.338464 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.358929 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-79bnd" podStartSLOduration=48.358915639 podStartE2EDuration="48.358915639s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.357492483 +0000 UTC m=+73.117561798" watchObservedRunningTime="2025-12-27 05:45:50.358915639 +0000 UTC m=+73.118984954" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.385958 4760 generic.go:334] "Generic (PLEG): container finished" podID="cf3096b8-b961-454b-9647-ac2b9d3868ca" containerID="9f57320a6d367e967e9e87e695d03431184c3adc22c5a3ea652d1b2241a63d6b" exitCode=0 Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.386029 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" event={"ID":"cf3096b8-b961-454b-9647-ac2b9d3868ca","Type":"ContainerDied","Data":"9f57320a6d367e967e9e87e695d03431184c3adc22c5a3ea652d1b2241a63d6b"} Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.386794 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lszcc"] Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.387948 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.402563 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.404934 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.406152 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/277187d7-c71a-4583-8d65-2e713e20557d-utilities\") pod \"community-operators-xqm68\" (UID: \"277187d7-c71a-4583-8d65-2e713e20557d\") " pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:45:50 crc kubenswrapper[4760]: E1227 05:45:50.406237 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:50.906207276 +0000 UTC m=+73.666276591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.407685 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/277187d7-c71a-4583-8d65-2e713e20557d-utilities\") pod \"community-operators-xqm68\" (UID: \"277187d7-c71a-4583-8d65-2e713e20557d\") " pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.407733 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgvfs\" (UniqueName: \"kubernetes.io/projected/277187d7-c71a-4583-8d65-2e713e20557d-kube-api-access-hgvfs\") pod \"community-operators-xqm68\" (UID: \"277187d7-c71a-4583-8d65-2e713e20557d\") " pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.407812 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.407864 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/277187d7-c71a-4583-8d65-2e713e20557d-catalog-content\") pod \"community-operators-xqm68\" (UID: \"277187d7-c71a-4583-8d65-2e713e20557d\") " pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:45:50 crc 
kubenswrapper[4760]: E1227 05:45:50.408652 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:50.908638636 +0000 UTC m=+73.668707961 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.415500 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/277187d7-c71a-4583-8d65-2e713e20557d-catalog-content\") pod \"community-operators-xqm68\" (UID: \"277187d7-c71a-4583-8d65-2e713e20557d\") " pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.419350 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f" event={"ID":"5d80f15d-ce8a-4c94-834d-9ec4829bc34f","Type":"ContainerStarted","Data":"3477247fa57dbd54ed10cd6d3dd3763c7a861f01c34a2d3ff721b20611dfb74a"} Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.422383 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lszcc"] Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.431913 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r48mz" podStartSLOduration=48.431894555 podStartE2EDuration="48.431894555s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.431521535 +0000 UTC m=+73.191590850" watchObservedRunningTime="2025-12-27 05:45:50.431894555 +0000 UTC m=+73.191963870" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.475830 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgvfs\" (UniqueName: \"kubernetes.io/projected/277187d7-c71a-4583-8d65-2e713e20557d-kube-api-access-hgvfs\") pod \"community-operators-xqm68\" (UID: \"277187d7-c71a-4583-8d65-2e713e20557d\") " pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.489483 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92" event={"ID":"bc145080-11cc-455d-b8d4-7baab6859228","Type":"ContainerStarted","Data":"9068efafb04c685be2468815ba432b47f966b9fa70f539fff9f0325bb9d7bab1"} Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.490521 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.491107 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" podStartSLOduration=49.491063744 podStartE2EDuration="49.491063744s" 
podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.474493838 +0000 UTC m=+73.234563173" watchObservedRunningTime="2025-12-27 05:45:50.491063744 +0000 UTC m=+73.251133079" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.498601 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" podStartSLOduration=14.498583577 podStartE2EDuration="14.498583577s" podCreationTimestamp="2025-12-27 05:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.496710152 +0000 UTC m=+73.256779477" watchObservedRunningTime="2025-12-27 05:45:50.498583577 +0000 UTC m=+73.258652892" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.510407 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.510768 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hktl7\" (UniqueName: \"kubernetes.io/projected/4b5d003c-9d11-417b-aafd-19fde5a27981-kube-api-access-hktl7\") pod \"certified-operators-lszcc\" (UID: \"4b5d003c-9d11-417b-aafd-19fde5a27981\") " pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.510931 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b5d003c-9d11-417b-aafd-19fde5a27981-utilities\") pod \"certified-operators-lszcc\" (UID: \"4b5d003c-9d11-417b-aafd-19fde5a27981\") " pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.511048 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b5d003c-9d11-417b-aafd-19fde5a27981-catalog-content\") pod \"certified-operators-lszcc\" (UID: \"4b5d003c-9d11-417b-aafd-19fde5a27981\") " pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:45:50 crc kubenswrapper[4760]: E1227 05:45:50.511200 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:51.011182106 +0000 UTC m=+73.771251421 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.523558 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qxkdj" podStartSLOduration=48.523538348 podStartE2EDuration="48.523538348s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.522789761 +0000 UTC m=+73.282859086" watchObservedRunningTime="2025-12-27 05:45:50.523538348 +0000 UTC m=+73.283607663" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.533358 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp" event={"ID":"073ab0ad-d76e-4ea2-8ecf-eb3436e824bb","Type":"ContainerStarted","Data":"2f20e27a95aa076718a7d30a4e427816a3d7f099bc4945e055d385b4add71741"} Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.557938 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:45:50 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Dec 27 05:45:50 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:45:50 crc kubenswrapper[4760]: healthz check failed Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.557994 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.559937 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-2x9wk" podStartSLOduration=49.559916359 podStartE2EDuration="49.559916359s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.558464633 +0000 UTC m=+73.318533948" watchObservedRunningTime="2025-12-27 05:45:50.559916359 +0000 UTC m=+73.319985674" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.565279 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.565682 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.575359 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6cgcs" event={"ID":"411e4a40-9030-4d01-aeb4-c5dd6d25b9b2","Type":"ContainerStarted","Data":"fb8fc87aed4ebe26dfafcfe4898612b23fba8ef2ec11481db25ccbfce51366d1"} Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.575453 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.592997 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ch56x"] Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.594469 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.612353 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b5d003c-9d11-417b-aafd-19fde5a27981-catalog-content\") pod \"certified-operators-lszcc\" (UID: \"4b5d003c-9d11-417b-aafd-19fde5a27981\") " pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.612389 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.612436 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktl7\" (UniqueName: \"kubernetes.io/projected/4b5d003c-9d11-417b-aafd-19fde5a27981-kube-api-access-hktl7\") pod \"certified-operators-lszcc\" (UID: \"4b5d003c-9d11-417b-aafd-19fde5a27981\") " pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.612525 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b5d003c-9d11-417b-aafd-19fde5a27981-utilities\") pod \"certified-operators-lszcc\" (UID: \"4b5d003c-9d11-417b-aafd-19fde5a27981\") " pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.613586 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b5d003c-9d11-417b-aafd-19fde5a27981-catalog-content\") pod \"certified-operators-lszcc\" (UID: \"4b5d003c-9d11-417b-aafd-19fde5a27981\") " pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.613814 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b5d003c-9d11-417b-aafd-19fde5a27981-utilities\") pod \"certified-operators-lszcc\" (UID: \"4b5d003c-9d11-417b-aafd-19fde5a27981\") " 
pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:45:50 crc kubenswrapper[4760]: E1227 05:45:50.613917 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:51.113906281 +0000 UTC m=+73.873975586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.622527 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" podStartSLOduration=48.622509731 podStartE2EDuration="48.622509731s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.612779913 +0000 UTC m=+73.372849228" watchObservedRunningTime="2025-12-27 05:45:50.622509731 +0000 UTC m=+73.382579046" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.631455 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ch56x"] Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.659149 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hktl7\" (UniqueName: \"kubernetes.io/projected/4b5d003c-9d11-417b-aafd-19fde5a27981-kube-api-access-hktl7\") pod \"certified-operators-lszcc\" (UID: \"4b5d003c-9d11-417b-aafd-19fde5a27981\") " pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.668835 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.719878 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.720493 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b811bcc1-2320-4047-93a9-9d79516a3551-utilities\") pod \"community-operators-ch56x\" (UID: \"b811bcc1-2320-4047-93a9-9d79516a3551\") " pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.720637 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxs2g\" (UniqueName: \"kubernetes.io/projected/b811bcc1-2320-4047-93a9-9d79516a3551-kube-api-access-lxs2g\") pod \"community-operators-ch56x\" (UID: \"b811bcc1-2320-4047-93a9-9d79516a3551\") " pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.720752 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b811bcc1-2320-4047-93a9-9d79516a3551-catalog-content\") pod \"community-operators-ch56x\" (UID: \"b811bcc1-2320-4047-93a9-9d79516a3551\") " pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:45:50 crc kubenswrapper[4760]: E1227 05:45:50.721555 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:51.221539225 +0000 UTC m=+73.981608540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.723857 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-h4drj"] Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.737246 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-phfqm" podStartSLOduration=48.73722481 podStartE2EDuration="48.73722481s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.705675057 +0000 UTC m=+73.465744372" watchObservedRunningTime="2025-12-27 05:45:50.73722481 +0000 UTC m=+73.497294125" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.749850 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.840723 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sbswf" podStartSLOduration=48.840706233 podStartE2EDuration="48.840706233s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.780724685 +0000 UTC m=+73.540794000" watchObservedRunningTime="2025-12-27 05:45:50.840706233 +0000 UTC m=+73.600775548" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.841140 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:50 crc kubenswrapper[4760]: E1227 05:45:50.841397 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-27 05:45:51.341384309 +0000 UTC m=+74.101453624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.849800 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b811bcc1-2320-4047-93a9-9d79516a3551-utilities\") pod \"community-operators-ch56x\" (UID: \"b811bcc1-2320-4047-93a9-9d79516a3551\") " pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.849925 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxs2g\" (UniqueName: \"kubernetes.io/projected/b811bcc1-2320-4047-93a9-9d79516a3551-kube-api-access-lxs2g\") pod \"community-operators-ch56x\" (UID: \"b811bcc1-2320-4047-93a9-9d79516a3551\") " pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.850034 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b811bcc1-2320-4047-93a9-9d79516a3551-catalog-content\") pod \"community-operators-ch56x\" (UID: \"b811bcc1-2320-4047-93a9-9d79516a3551\") " pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.850505 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b811bcc1-2320-4047-93a9-9d79516a3551-catalog-content\") pod \"community-operators-ch56x\" (UID: \"b811bcc1-2320-4047-93a9-9d79516a3551\") " pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.841520 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6gp8b"] Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.851604 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.852406 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b811bcc1-2320-4047-93a9-9d79516a3551-utilities\") pod \"community-operators-ch56x\" (UID: \"b811bcc1-2320-4047-93a9-9d79516a3551\") " pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.865255 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" podStartSLOduration=49.865238443 podStartE2EDuration="49.865238443s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.853822554 +0000 UTC m=+73.613891869" watchObservedRunningTime="2025-12-27 05:45:50.865238443 +0000 UTC m=+73.625307758" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.866884 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6gp8b"] Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.887179 4760 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-x4v5w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.887242 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" podUID="933d294b-c115-4bd3-ade2-1ae37665ae1b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.917058 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxs2g\" (UniqueName: \"kubernetes.io/projected/b811bcc1-2320-4047-93a9-9d79516a3551-kube-api-access-lxs2g\") pod \"community-operators-ch56x\" (UID: \"b811bcc1-2320-4047-93a9-9d79516a3551\") " pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.918248 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" podStartSLOduration=48.918237691 podStartE2EDuration="48.918237691s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.918127738 +0000 UTC m=+73.678197063" watchObservedRunningTime="2025-12-27 05:45:50.918237691 +0000 UTC m=+73.678307006" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.956153 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.956442 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jx74\" (UniqueName: \"kubernetes.io/projected/21439a0b-e71e-4574-87d5-cd7881120c41-kube-api-access-8jx74\") pod \"certified-operators-6gp8b\" (UID: \"21439a0b-e71e-4574-87d5-cd7881120c41\") " pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.956568 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21439a0b-e71e-4574-87d5-cd7881120c41-catalog-content\") pod \"certified-operators-6gp8b\" (UID: \"21439a0b-e71e-4574-87d5-cd7881120c41\") " pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.956606 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21439a0b-e71e-4574-87d5-cd7881120c41-utilities\") pod \"certified-operators-6gp8b\" (UID: \"21439a0b-e71e-4574-87d5-cd7881120c41\") " pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:45:50 crc kubenswrapper[4760]: E1227 05:45:50.956731 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:51.456712392 +0000 UTC m=+74.216781707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:50 crc kubenswrapper[4760]: I1227 05:45:50.974667 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8hds" podStartSLOduration=48.974650092 podStartE2EDuration="48.974650092s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:50.972619232 +0000 UTC m=+73.732688537" watchObservedRunningTime="2025-12-27 05:45:50.974650092 +0000 UTC m=+73.734719407" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.005987 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kkb4n" podStartSLOduration=15.005971029 podStartE2EDuration="15.005971029s" podCreationTimestamp="2025-12-27 05:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:51.004986985 +0000 UTC m=+73.765056300" watchObservedRunningTime="2025-12-27 05:45:51.005971029 +0000 UTC m=+73.766040344" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.060801 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21439a0b-e71e-4574-87d5-cd7881120c41-utilities\") pod \"certified-operators-6gp8b\" (UID: \"21439a0b-e71e-4574-87d5-cd7881120c41\") " 
pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.060851 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jx74\" (UniqueName: \"kubernetes.io/projected/21439a0b-e71e-4574-87d5-cd7881120c41-kube-api-access-8jx74\") pod \"certified-operators-6gp8b\" (UID: \"21439a0b-e71e-4574-87d5-cd7881120c41\") " pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.060901 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.060954 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21439a0b-e71e-4574-87d5-cd7881120c41-catalog-content\") pod \"certified-operators-6gp8b\" (UID: \"21439a0b-e71e-4574-87d5-cd7881120c41\") " pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.061594 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21439a0b-e71e-4574-87d5-cd7881120c41-catalog-content\") pod \"certified-operators-6gp8b\" (UID: \"21439a0b-e71e-4574-87d5-cd7881120c41\") " pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:45:51 crc kubenswrapper[4760]: E1227 05:45:51.061822 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:51.561812246 +0000 UTC m=+74.321881551 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.061837 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21439a0b-e71e-4574-87d5-cd7881120c41-utilities\") pod \"certified-operators-6gp8b\" (UID: \"21439a0b-e71e-4574-87d5-cd7881120c41\") " pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.094390 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-w927h" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.124783 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-w927h" podStartSLOduration=49.124763976 podStartE2EDuration="49.124763976s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:51.109220266 +0000 UTC m=+73.869289581" watchObservedRunningTime="2025-12-27 05:45:51.124763976 +0000 UTC m=+73.884833291" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.138394 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.162672 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.163181 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jx74\" (UniqueName: \"kubernetes.io/projected/21439a0b-e71e-4574-87d5-cd7881120c41-kube-api-access-8jx74\") pod \"certified-operators-6gp8b\" (UID: \"21439a0b-e71e-4574-87d5-cd7881120c41\") " pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:45:51 crc kubenswrapper[4760]: E1227 05:45:51.163218 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:51.663203308 +0000 UTC m=+74.423272623 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.221255 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.270884 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:51 crc kubenswrapper[4760]: E1227 05:45:51.271282 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:51.771269883 +0000 UTC m=+74.531339198 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.277310 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-q7d2f" podStartSLOduration=49.27729442 podStartE2EDuration="49.27729442s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:51.27642252 +0000 UTC m=+74.036491835" watchObservedRunningTime="2025-12-27 05:45:51.27729442 +0000 UTC m=+74.037363735" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.278717 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-g5866" podStartSLOduration=50.278712995 podStartE2EDuration="50.278712995s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:51.179353733 +0000 UTC m=+73.939423038" watchObservedRunningTime="2025-12-27 05:45:51.278712995 +0000 UTC m=+74.038782300" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.372323 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:51 crc kubenswrapper[4760]: E1227 05:45:51.372696 
4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:51.872680736 +0000 UTC m=+74.632750051 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.408860 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tlp92" podStartSLOduration=49.408845891 podStartE2EDuration="49.408845891s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:51.406384481 +0000 UTC m=+74.166453796" watchObservedRunningTime="2025-12-27 05:45:51.408845891 +0000 UTC m=+74.168915206" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.477764 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:51 crc kubenswrapper[4760]: E1227 05:45:51.478064 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:51.978053396 +0000 UTC m=+74.738122711 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.530223 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:45:51 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Dec 27 05:45:51 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:45:51 crc kubenswrapper[4760]: healthz check failed Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.530565 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.541917 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xqm68"] Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.580246 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:51 crc kubenswrapper[4760]: E1227 05:45:51.580554 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:52.080538394 +0000 UTC m=+74.840607709 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:51 crc kubenswrapper[4760]: W1227 05:45:51.586508 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod277187d7_c71a_4583_8d65_2e713e20557d.slice/crio-9cc5938c55c506f4c114a0d5f42c2d50b2be51dce08d25210e8743a920ecfd3b WatchSource:0}: Error finding container 9cc5938c55c506f4c114a0d5f42c2d50b2be51dce08d25210e8743a920ecfd3b: Status 404 returned error can't find the container with id 9cc5938c55c506f4c114a0d5f42c2d50b2be51dce08d25210e8743a920ecfd3b Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.653241 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-6cgcs" podStartSLOduration=49.653223604 podStartE2EDuration="49.653223604s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:51.652816104 +0000 UTC m=+74.412885419" watchObservedRunningTime="2025-12-27 05:45:51.653223604 +0000 UTC m=+74.413292919" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.681278 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" event={"ID":"64a3b986-ea3d-4cbb-83ba-44971b220664","Type":"ContainerStarted","Data":"5a0f4bb931038a3f5562f8d4fdf9fd12013d37e2148b7fee45542787b2e33dd6"} Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.681886 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.682005 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:51 crc kubenswrapper[4760]: E1227 05:45:51.682296 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:52.182285045 +0000 UTC m=+74.942354360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.697966 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.733178 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb" event={"ID":"fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2","Type":"ContainerStarted","Data":"3d7a1a9e70d2c8626453c495c703e62ae4f72884818260c0588703eb207426a1"} Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.733234 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb" event={"ID":"fa391cd6-ff1d-4f06-9d9d-f76cacaf67d2","Type":"ContainerStarted","Data":"ee7b7b142be8ccb100a662b1a7ff6a4e604a7a3b8d8d3216858f07ced2bcb3fe"} Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.749990 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kjv6" event={"ID":"1521745e-1f44-4a25-8eff-05062c2c24ef","Type":"ContainerStarted","Data":"c5b3a3dcdd6390bda93ebb93757cb12e8f777d29b3d1bc28dba06a0882e46694"} Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.781874 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp" podStartSLOduration=49.781841603 podStartE2EDuration="49.781841603s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:51.693793727 +0000 UTC m=+74.453863042" watchObservedRunningTime="2025-12-27 05:45:51.781841603 +0000 UTC m=+74.541910918" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.782810 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:51 crc kubenswrapper[4760]: E1227 05:45:51.783927 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:52.283911463 +0000 UTC m=+75.043980768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.810702 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mq8g4" podStartSLOduration=49.810683918 podStartE2EDuration="49.810683918s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:51.7813655 +0000 UTC m=+74.541434815" watchObservedRunningTime="2025-12-27 05:45:51.810683918 +0000 UTC m=+74.570753233" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.812137 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bxmb9" event={"ID":"d7ad49d7-d1ed-4414-8a10-778d020e1da5","Type":"ContainerStarted","Data":"6908d3c258a0628edcd0b053e892b8e0a8f591df753348bc99d02ea89c809946"} Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.887335 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" event={"ID":"9a5980c3-ee00-41b1-9707-f349149a53c4","Type":"ContainerStarted","Data":"397a4c91072f70bab1aa7b7034ef224e7f8444f02d4d177c56ee58aa4b1557b6"} Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.888873 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:51 crc kubenswrapper[4760]: E1227 05:45:51.890421 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:52.39040859 +0000 UTC m=+75.150477905 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.920240 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjb" podStartSLOduration=50.9202252 podStartE2EDuration="50.9202252s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:51.839515324 +0000 UTC m=+74.599584639" watchObservedRunningTime="2025-12-27 05:45:51.9202252 +0000 UTC m=+74.680294505" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.921764 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tqbx6" event={"ID":"911b180f-4536-4181-956b-abd6e2c8e0d0","Type":"ContainerStarted","Data":"284ea84a150ec3553f6274abff193030da6e605389a4d5188c6ad30ca833cd54"} Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.968158 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9wjz8" event={"ID":"32612889-890b-4efb-a777-8ad13a778841","Type":"ContainerStarted","Data":"2e1d0877cc29792aa39a3137ab5166b58a9d7ced695d782653c70aa2486ac7e0"} Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.997473 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" podStartSLOduration=50.997458381 podStartE2EDuration="50.997458381s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:51.948911902 +0000 UTC m=+74.708981237" watchObservedRunningTime="2025-12-27 05:45:51.997458381 +0000 UTC m=+74.757527686" Dec 27 05:45:51 crc kubenswrapper[4760]: I1227 05:45:51.999822 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:52 crc kubenswrapper[4760]: E1227 05:45:52.000862 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:52.500850354 +0000 UTC m=+75.260919669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.019376 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" event={"ID":"17a2cff3-78a9-45b6-a044-81e7cc36ca0e","Type":"ContainerStarted","Data":"26edc792d7ea63124392f03249da18fc5e3f9e2f429e9b74798025bd2e89aebb"} Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.022217 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" event={"ID":"cf3096b8-b961-454b-9647-ac2b9d3868ca","Type":"ContainerStarted","Data":"bc2d1d64f95b532e7473a9a840d81febb6cb889c1d28d53b06dfaad0056a34da"} Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.038862 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8hds" event={"ID":"e90733f1-2aa6-4487-9212-1f21cc77bea4","Type":"ContainerStarted","Data":"0b9b2a7a94a2ae7353990c53a46663aa8536ea3678ede7680512fbbe2aa6f5b9"} Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.047553 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-phfqm" event={"ID":"e75b3601-c0ac-4de5-9847-52a8c087a6f9","Type":"ContainerStarted","Data":"a9b04bf0d9242e8e75c29e38b2854e0176d6adc7fb5f3dfcb7a9764142deb0f7"} Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.051787 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-tqbx6" podStartSLOduration=50.051768031 podStartE2EDuration="50.051768031s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:51.998409464 +0000 UTC m=+74.758478779" watchObservedRunningTime="2025-12-27 05:45:52.051768031 +0000 UTC m=+74.811837346" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.053832 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bxmb9" podStartSLOduration=51.05382485 podStartE2EDuration="51.05382485s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:52.046347048 +0000 UTC m=+74.806416363" watchObservedRunningTime="2025-12-27 05:45:52.05382485 +0000 UTC m=+74.813894165" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.083250 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" event={"ID":"3bb3c435-9a35-45f9-af35-cb29f2e6ccb1","Type":"ContainerStarted","Data":"7b894b264299e9a52bf437e2767b59d9a6b9bde2318855343a0fd25a8691d0d8"} Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.106688 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:52 crc kubenswrapper[4760]: E1227 05:45:52.107120 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:52.607105205 +0000 UTC m=+75.367174520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.118573 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ctvzp" event={"ID":"073ab0ad-d76e-4ea2-8ecf-eb3436e824bb","Type":"ContainerStarted","Data":"94f38cc8311897873c2997c75f67cdd1ad5c3794dcfd8f75cbd58306edf22c59"} Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.138276 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6j9tg" podStartSLOduration=50.138262207 podStartE2EDuration="50.138262207s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:52.083523697 +0000 UTC m=+74.843593012" watchObservedRunningTime="2025-12-27 05:45:52.138262207 +0000 UTC m=+74.898331522" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.138653 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4pjtd" podStartSLOduration=51.138649077 podStartE2EDuration="51.138649077s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:52.135811498 +0000 UTC m=+74.895880813" watchObservedRunningTime="2025-12-27 05:45:52.138649077 +0000 UTC m=+74.898718392" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.150516 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ch56x"] Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.160744 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p9vf5" event={"ID":"55aab342-3de1-46cf-8d85-d71345fd1538","Type":"ContainerStarted","Data":"7213605b4560f609b8ff18fabedafb22935e18c131bbb115d6245ff2319e085e"} Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.161362 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-p9vf5" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.188533 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z" 
event={"ID":"67b40739-4e24-41b5-9d6a-7ab19939c81c","Type":"ContainerStarted","Data":"ea224c92187acefb8e4ce014c2ce4b3af92280f6217cdb105ffecbbb0721189d"} Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.200853 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-g5866 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.200904 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-g5866" podUID="0015afce-dba1-4f1d-a3d3-5f9abe477e43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.201813 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-2x9wk" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.209703 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:52 crc kubenswrapper[4760]: E1227 05:45:52.210463 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:52.710447875 +0000 UTC m=+75.470517190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.218831 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.219959 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" podStartSLOduration=50.219942238 podStartE2EDuration="50.219942238s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:52.218708467 +0000 UTC m=+74.978777782" watchObservedRunningTime="2025-12-27 05:45:52.219942238 +0000 UTC m=+74.980011553" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.242657 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-2x9wk" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.310878 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-p9vf5" podStartSLOduration=16.310861343 podStartE2EDuration="16.310861343s" podCreationTimestamp="2025-12-27 05:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:52.270116116 +0000 UTC m=+75.030185451" watchObservedRunningTime="2025-12-27 05:45:52.310861343 +0000 UTC m=+75.070930658" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.312560 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lszcc"] Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.313078 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.314999 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6gp8b"] Dec 27 05:45:52 crc kubenswrapper[4760]: E1227 05:45:52.315043 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:52.815031245 +0000 UTC m=+75.575100550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.415575 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-g5866 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.415806 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-g5866" podUID="0015afce-dba1-4f1d-a3d3-5f9abe477e43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.416426 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.416430 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-g5866 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.416585 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-g5866" podUID="0015afce-dba1-4f1d-a3d3-5f9abe477e43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 27 05:45:52 crc kubenswrapper[4760]: E1227 05:45:52.416760 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:52.916746725 +0000 UTC m=+75.676816040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.503443 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.511265 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:45:52 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Dec 27 05:45:52 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:45:52 crc kubenswrapper[4760]: healthz check failed Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.511310 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.515148 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.515346 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.518016 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:52 crc kubenswrapper[4760]: E1227 05:45:52.518302 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:53.018292331 +0000 UTC m=+75.778361646 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.520320 4760 patch_prober.go:28] interesting pod/console-f9d7485db-cmmr9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.520360 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cmmr9" podUID="3dfa4237-e979-4215-9f2c-20aa6303cae7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.552959 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mr26d"] Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.553901 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.555503 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 27 05:45:52 crc kubenswrapper[4760]: W1227 05:45:52.573882 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b5d003c_9d11_417b_aafd_19fde5a27981.slice/crio-108c4f3650febb6f045863a29717bd14f2bcc084a927f3ae1eb077191c9ae272 WatchSource:0}: Error finding container 108c4f3650febb6f045863a29717bd14f2bcc084a927f3ae1eb077191c9ae272: Status 404 returned error can't find the container with id 108c4f3650febb6f045863a29717bd14f2bcc084a927f3ae1eb077191c9ae272 Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.611007 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mr26d"] Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.618998 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.619176 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-catalog-content\") pod \"redhat-marketplace-mr26d\" (UID: \"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1\") " pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.619240 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-utilities\") pod \"redhat-marketplace-mr26d\" (UID: 
\"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1\") " pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.619271 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qmq6\" (UniqueName: \"kubernetes.io/projected/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-kube-api-access-8qmq6\") pod \"redhat-marketplace-mr26d\" (UID: \"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1\") " pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:45:52 crc kubenswrapper[4760]: E1227 05:45:52.619391 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:53.119377676 +0000 UTC m=+75.879446991 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.720980 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.721038 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-catalog-content\") pod \"redhat-marketplace-mr26d\" (UID: \"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1\") " pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.721148 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-utilities\") pod \"redhat-marketplace-mr26d\" (UID: \"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1\") " pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.721176 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qmq6\" (UniqueName: \"kubernetes.io/projected/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-kube-api-access-8qmq6\") pod \"redhat-marketplace-mr26d\" (UID: \"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1\") " pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:45:52 crc kubenswrapper[4760]: E1227 05:45:52.721644 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:53.221634299 +0000 UTC m=+75.981703614 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.721987 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-catalog-content\") pod \"redhat-marketplace-mr26d\" (UID: \"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1\") " pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.727704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-utilities\") pod \"redhat-marketplace-mr26d\" (UID: \"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1\") " pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.824919 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qmq6\" (UniqueName: \"kubernetes.io/projected/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-kube-api-access-8qmq6\") pod \"redhat-marketplace-mr26d\" (UID: \"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1\") " pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.829166 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:52 crc kubenswrapper[4760]: E1227 05:45:52.829544 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:53.32952932 +0000 UTC m=+76.089598625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.872517 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.932246 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:52 crc kubenswrapper[4760]: E1227 05:45:52.932556 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:53.432543572 +0000 UTC m=+76.192612897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.972911 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p6tss"] Dec 27 05:45:52 crc kubenswrapper[4760]: I1227 05:45:52.973817 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.008989 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p6tss"] Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.039732 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.039940 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee14eb8b-710a-4d45-b2a6-008c2f02b154-catalog-content\") pod \"redhat-marketplace-p6tss\" (UID: \"ee14eb8b-710a-4d45-b2a6-008c2f02b154\") " pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.040021 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8w8h\" (UniqueName: \"kubernetes.io/projected/ee14eb8b-710a-4d45-b2a6-008c2f02b154-kube-api-access-p8w8h\") pod \"redhat-marketplace-p6tss\" (UID: \"ee14eb8b-710a-4d45-b2a6-008c2f02b154\") " pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.040100 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee14eb8b-710a-4d45-b2a6-008c2f02b154-utilities\") pod \"redhat-marketplace-p6tss\" (UID: \"ee14eb8b-710a-4d45-b2a6-008c2f02b154\") " pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:45:53 crc 
kubenswrapper[4760]: E1227 05:45:53.040245 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:53.540229769 +0000 UTC m=+76.300299084 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.141583 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.141633 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee14eb8b-710a-4d45-b2a6-008c2f02b154-catalog-content\") pod \"redhat-marketplace-p6tss\" (UID: \"ee14eb8b-710a-4d45-b2a6-008c2f02b154\") " pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.142013 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee14eb8b-710a-4d45-b2a6-008c2f02b154-catalog-content\") pod \"redhat-marketplace-p6tss\" (UID: \"ee14eb8b-710a-4d45-b2a6-008c2f02b154\") " pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.142062 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8w8h\" (UniqueName: \"kubernetes.io/projected/ee14eb8b-710a-4d45-b2a6-008c2f02b154-kube-api-access-p8w8h\") pod \"redhat-marketplace-p6tss\" (UID: \"ee14eb8b-710a-4d45-b2a6-008c2f02b154\") " pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.142133 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee14eb8b-710a-4d45-b2a6-008c2f02b154-utilities\") pod \"redhat-marketplace-p6tss\" (UID: \"ee14eb8b-710a-4d45-b2a6-008c2f02b154\") " pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.142365 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee14eb8b-710a-4d45-b2a6-008c2f02b154-utilities\") pod \"redhat-marketplace-p6tss\" (UID: \"ee14eb8b-710a-4d45-b2a6-008c2f02b154\") " pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:45:53 crc kubenswrapper[4760]: E1227 05:45:53.143125 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-27 05:45:53.643106607 +0000 UTC m=+76.403175922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.174783 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8w8h\" (UniqueName: \"kubernetes.io/projected/ee14eb8b-710a-4d45-b2a6-008c2f02b154-kube-api-access-p8w8h\") pod \"redhat-marketplace-p6tss\" (UID: \"ee14eb8b-710a-4d45-b2a6-008c2f02b154\") " pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.203273 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gp8b" event={"ID":"21439a0b-e71e-4574-87d5-cd7881120c41","Type":"ContainerStarted","Data":"c166898314fbab3fb25d8c83db68f23af5459f76c4fad192abd29d93b1b907dc"} Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.203342 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gp8b" event={"ID":"21439a0b-e71e-4574-87d5-cd7881120c41","Type":"ContainerStarted","Data":"4902c247d665c6466d227f324414ee70a93302052a507eece4873f8506af1d0f"} Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.225722 4760 generic.go:334] "Generic (PLEG): container finished" podID="b811bcc1-2320-4047-93a9-9d79516a3551" containerID="fac26d0244bb6a500b8a27f89dff44c45e2f978acda2e66f10984b359d3f80c7" exitCode=0 Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.225826 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch56x" event={"ID":"b811bcc1-2320-4047-93a9-9d79516a3551","Type":"ContainerDied","Data":"fac26d0244bb6a500b8a27f89dff44c45e2f978acda2e66f10984b359d3f80c7"} Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.225862 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch56x" event={"ID":"b811bcc1-2320-4047-93a9-9d79516a3551","Type":"ContainerStarted","Data":"9db2dc094d59931b40eb009127740150e3df65c8a1e5c495f0b722cad8999ed2"} Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.228077 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.233316 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lszcc" event={"ID":"4b5d003c-9d11-417b-aafd-19fde5a27981","Type":"ContainerStarted","Data":"514870c8d015164540deba94de6ee25ecf4fed35a4eeff21a643f159d0238b18"} Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.233350 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lszcc" event={"ID":"4b5d003c-9d11-417b-aafd-19fde5a27981","Type":"ContainerStarted","Data":"108c4f3650febb6f045863a29717bd14f2bcc084a927f3ae1eb077191c9ae272"} Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.242845 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:53 crc kubenswrapper[4760]: E1227 05:45:53.243179 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:53.743163716 +0000 UTC m=+76.503233031 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.266039 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" event={"ID":"cf3096b8-b961-454b-9647-ac2b9d3868ca","Type":"ContainerStarted","Data":"90a8753b4304596f09aad424104d83c9958c2dd6780e8e6aaf3751c34760fdf2"} Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.281895 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l26l5" event={"ID":"7f4d886c-9f72-4358-8ddb-f820f7181639","Type":"ContainerStarted","Data":"51c27d1cc6048bd0ad38facf8b6a348812ac5b1dbbd7d37f6245c04a74ee36d3"} Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.300840 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.306566 4760 generic.go:334] "Generic (PLEG): container finished" podID="277187d7-c71a-4583-8d65-2e713e20557d" containerID="87bbbc4a924633fac60b85e5e666e81124d050bba11c1b4da772850a3b6518ca" exitCode=0 Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.307566 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqm68" event={"ID":"277187d7-c71a-4583-8d65-2e713e20557d","Type":"ContainerDied","Data":"87bbbc4a924633fac60b85e5e666e81124d050bba11c1b4da772850a3b6518ca"} Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.307596 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqm68" event={"ID":"277187d7-c71a-4583-8d65-2e713e20557d","Type":"ContainerStarted","Data":"9cc5938c55c506f4c114a0d5f42c2d50b2be51dce08d25210e8743a920ecfd3b"} Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.307692 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" podUID="abd7bc00-bf5b-48a1-94fe-82dae0bc732e" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" gracePeriod=30 Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.315633 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-g5866 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.315675 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-g5866" podUID="0015afce-dba1-4f1d-a3d3-5f9abe477e43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.321265 4760 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pdd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.321345 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" podUID="64a3b986-ea3d-4cbb-83ba-44971b220664" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.346217 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:53 crc kubenswrapper[4760]: E1227 05:45:53.347520 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-27 05:45:53.84750784 +0000 UTC m=+76.607577155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.375895 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" podStartSLOduration=52.375878215 podStartE2EDuration="52.375878215s" podCreationTimestamp="2025-12-27 05:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:53.344519957 +0000 UTC m=+76.104589272" watchObservedRunningTime="2025-12-27 05:45:53.375878215 +0000 UTC m=+76.135947530" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.380670 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7msw2"] Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.432873 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7msw2"] Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.432978 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.446450 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.447248 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:53 crc kubenswrapper[4760]: E1227 05:45:53.451205 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:53.951182628 +0000 UTC m=+76.711252003 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.513258 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:45:53 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Dec 27 05:45:53 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:45:53 crc kubenswrapper[4760]: healthz check failed Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.513477 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.559053 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.559143 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55c01569-3cd2-4c5f-9039-b61176dac0f3-catalog-content\") pod \"redhat-operators-7msw2\" (UID: \"55c01569-3cd2-4c5f-9039-b61176dac0f3\") " pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.559186 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b85hv\" (UniqueName: \"kubernetes.io/projected/55c01569-3cd2-4c5f-9039-b61176dac0f3-kube-api-access-b85hv\") pod \"redhat-operators-7msw2\" (UID: \"55c01569-3cd2-4c5f-9039-b61176dac0f3\") " pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.559222 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55c01569-3cd2-4c5f-9039-b61176dac0f3-utilities\") pod \"redhat-operators-7msw2\" (UID: \"55c01569-3cd2-4c5f-9039-b61176dac0f3\") " pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:45:53 crc kubenswrapper[4760]: E1227 05:45:53.559560 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:54.059545691 +0000 UTC m=+76.819615006 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.581897 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xz5fn"] Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.582867 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.584726 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xz5fn"] Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.608051 4760 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.660017 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.660497 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b85hv\" (UniqueName: \"kubernetes.io/projected/55c01569-3cd2-4c5f-9039-b61176dac0f3-kube-api-access-b85hv\") pod \"redhat-operators-7msw2\" (UID: \"55c01569-3cd2-4c5f-9039-b61176dac0f3\") " pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.660532 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55c01569-3cd2-4c5f-9039-b61176dac0f3-utilities\") pod \"redhat-operators-7msw2\" (UID: \"55c01569-3cd2-4c5f-9039-b61176dac0f3\") " pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.660594 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad6d9ba-2049-4d43-a786-9ce87644643f-catalog-content\") pod \"redhat-operators-xz5fn\" (UID: \"dad6d9ba-2049-4d43-a786-9ce87644643f\") " pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.660617 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad6d9ba-2049-4d43-a786-9ce87644643f-utilities\") pod \"redhat-operators-xz5fn\" (UID: \"dad6d9ba-2049-4d43-a786-9ce87644643f\") " pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.660643 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlk8v\" (UniqueName: \"kubernetes.io/projected/dad6d9ba-2049-4d43-a786-9ce87644643f-kube-api-access-nlk8v\") pod \"redhat-operators-xz5fn\" (UID: 
\"dad6d9ba-2049-4d43-a786-9ce87644643f\") " pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.660675 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55c01569-3cd2-4c5f-9039-b61176dac0f3-catalog-content\") pod \"redhat-operators-7msw2\" (UID: \"55c01569-3cd2-4c5f-9039-b61176dac0f3\") " pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.661019 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55c01569-3cd2-4c5f-9039-b61176dac0f3-catalog-content\") pod \"redhat-operators-7msw2\" (UID: \"55c01569-3cd2-4c5f-9039-b61176dac0f3\") " pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:45:53 crc kubenswrapper[4760]: E1227 05:45:53.661103 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:54.161071337 +0000 UTC m=+76.921140642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.661518 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55c01569-3cd2-4c5f-9039-b61176dac0f3-utilities\") pod \"redhat-operators-7msw2\" (UID: \"55c01569-3cd2-4c5f-9039-b61176dac0f3\") " pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.728848 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mr26d"] Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.732439 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b85hv\" (UniqueName: \"kubernetes.io/projected/55c01569-3cd2-4c5f-9039-b61176dac0f3-kube-api-access-b85hv\") pod \"redhat-operators-7msw2\" (UID: \"55c01569-3cd2-4c5f-9039-b61176dac0f3\") " pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.766893 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad6d9ba-2049-4d43-a786-9ce87644643f-catalog-content\") pod \"redhat-operators-xz5fn\" (UID: \"dad6d9ba-2049-4d43-a786-9ce87644643f\") " pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.766936 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad6d9ba-2049-4d43-a786-9ce87644643f-utilities\") pod \"redhat-operators-xz5fn\" (UID: \"dad6d9ba-2049-4d43-a786-9ce87644643f\") " pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.766959 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.766978 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlk8v\" (UniqueName: \"kubernetes.io/projected/dad6d9ba-2049-4d43-a786-9ce87644643f-kube-api-access-nlk8v\") pod \"redhat-operators-xz5fn\" (UID: \"dad6d9ba-2049-4d43-a786-9ce87644643f\") " pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:45:53 crc kubenswrapper[4760]: E1227 05:45:53.767462 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:54.267452161 +0000 UTC m=+77.027521476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.767854 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad6d9ba-2049-4d43-a786-9ce87644643f-utilities\") pod \"redhat-operators-xz5fn\" (UID: \"dad6d9ba-2049-4d43-a786-9ce87644643f\") " pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.767874 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad6d9ba-2049-4d43-a786-9ce87644643f-catalog-content\") pod \"redhat-operators-xz5fn\" (UID: \"dad6d9ba-2049-4d43-a786-9ce87644643f\") " pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.822014 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlk8v\" (UniqueName: \"kubernetes.io/projected/dad6d9ba-2049-4d43-a786-9ce87644643f-kube-api-access-nlk8v\") pod \"redhat-operators-xz5fn\" (UID: \"dad6d9ba-2049-4d43-a786-9ce87644643f\") " pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.870673 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:53 crc kubenswrapper[4760]: E1227 05:45:53.871007 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:54.370992796 +0000 UTC m=+77.131062111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.882383 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.970535 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.975002 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:53 crc kubenswrapper[4760]: E1227 05:45:53.975383 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:54.47536723 +0000 UTC m=+77.235436545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:53 crc kubenswrapper[4760]: I1227 05:45:53.982248 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p6tss"] Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.075542 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:54 crc kubenswrapper[4760]: E1227 05:45:54.075913 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:54.575899412 +0000 UTC m=+77.335968727 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.180323 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:54 crc kubenswrapper[4760]: E1227 05:45:54.180886 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:54.680875462 +0000 UTC m=+77.440944777 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.283451 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:54 crc kubenswrapper[4760]: E1227 05:45:54.283770 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:54.783755191 +0000 UTC m=+77.543824506 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.334060 4760 generic.go:334] "Generic (PLEG): container finished" podID="4b5d003c-9d11-417b-aafd-19fde5a27981" containerID="514870c8d015164540deba94de6ee25ecf4fed35a4eeff21a643f159d0238b18" exitCode=0 Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.334128 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lszcc" event={"ID":"4b5d003c-9d11-417b-aafd-19fde5a27981","Type":"ContainerDied","Data":"514870c8d015164540deba94de6ee25ecf4fed35a4eeff21a643f159d0238b18"} Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.351564 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr26d" event={"ID":"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1","Type":"ContainerStarted","Data":"6ed4a2eea571a9c1aba22cf422f2d2816706954a3222e0d643d8ac76162e8702"} Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.365272 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l26l5" event={"ID":"7f4d886c-9f72-4358-8ddb-f820f7181639","Type":"ContainerStarted","Data":"80d88e8f3447bf1ad033e28fd652b76001ccc6a89a9ff65660b4615112de1a03"} Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.379153 4760 generic.go:334] "Generic (PLEG): container finished" podID="21439a0b-e71e-4574-87d5-cd7881120c41" containerID="c166898314fbab3fb25d8c83db68f23af5459f76c4fad192abd29d93b1b907dc" exitCode=0 Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.379231 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gp8b" event={"ID":"21439a0b-e71e-4574-87d5-cd7881120c41","Type":"ContainerDied","Data":"c166898314fbab3fb25d8c83db68f23af5459f76c4fad192abd29d93b1b907dc"} Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.389407 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p6tss" event={"ID":"ee14eb8b-710a-4d45-b2a6-008c2f02b154","Type":"ContainerStarted","Data":"b704a291018ef2799a9849eda749e81292b8ca1938e62a877b9b3f426c54691f"} Dec 27 05:45:54 crc kubenswrapper[4760]: E1227 05:45:54.389316 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:54.889304784 +0000 UTC m=+77.649374099 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.389033 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.470165 4760 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-27T05:45:53.608070199Z","Handler":null,"Name":""} Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.495006 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:54 crc kubenswrapper[4760]: E1227 05:45:54.496957 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:54.996942319 +0000 UTC m=+77.757011624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.509221 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:45:54 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Dec 27 05:45:54 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:45:54 crc kubenswrapper[4760]: healthz check failed Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.509286 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.563131 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xz5fn"] Dec 27 05:45:54 crc kubenswrapper[4760]: W1227 05:45:54.574714 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddad6d9ba_2049_4d43_a786_9ce87644643f.slice/crio-8a758f6b925b34344593fe40e5afae6b2a90ac28321ed22a7c8a373d4ef5f007 WatchSource:0}: Error finding container 8a758f6b925b34344593fe40e5afae6b2a90ac28321ed22a7c8a373d4ef5f007: Status 404 returned error can't find the container with id 8a758f6b925b34344593fe40e5afae6b2a90ac28321ed22a7c8a373d4ef5f007 Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.597397 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:54 crc kubenswrapper[4760]: E1227 05:45:54.597668 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:55.097656655 +0000 UTC m=+77.857725970 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.674935 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7msw2"] Dec 27 05:45:54 crc kubenswrapper[4760]: W1227 05:45:54.681725 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55c01569_3cd2_4c5f_9039_b61176dac0f3.slice/crio-fed0264430a07eae7a46ee4d6d8d7f911aa5d60a2ad67a0072f471aceabf782f WatchSource:0}: Error finding container fed0264430a07eae7a46ee4d6d8d7f911aa5d60a2ad67a0072f471aceabf782f: Status 404 returned error can't find the container with id fed0264430a07eae7a46ee4d6d8d7f911aa5d60a2ad67a0072f471aceabf782f Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.689876 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.690543 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.693956 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.694887 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.697996 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:54 crc kubenswrapper[4760]: E1227 05:45:54.698194 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:55.198174726 +0000 UTC m=+77.958244041 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.698362 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:54 crc kubenswrapper[4760]: E1227 05:45:54.698678 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:55.198670578 +0000 UTC m=+77.958739893 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.701257 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.799444 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.799683 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08f127fa-1de2-4a5e-bb18-eecdbc5a01eb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"08f127fa-1de2-4a5e-bb18-eecdbc5a01eb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.799720 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08f127fa-1de2-4a5e-bb18-eecdbc5a01eb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"08f127fa-1de2-4a5e-bb18-eecdbc5a01eb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 27 05:45:54 crc kubenswrapper[4760]: E1227 05:45:54.799760 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:55.299736781 +0000 UTC m=+78.059806086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.799921 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:54 crc kubenswrapper[4760]: E1227 05:45:54.800243 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:55.300233694 +0000 UTC m=+78.060303009 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.901109 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:54 crc kubenswrapper[4760]: E1227 05:45:54.901319 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:55.401287928 +0000 UTC m=+78.161357243 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.901387 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08f127fa-1de2-4a5e-bb18-eecdbc5a01eb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"08f127fa-1de2-4a5e-bb18-eecdbc5a01eb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.901483 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.901561 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08f127fa-1de2-4a5e-bb18-eecdbc5a01eb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"08f127fa-1de2-4a5e-bb18-eecdbc5a01eb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.901648 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08f127fa-1de2-4a5e-bb18-eecdbc5a01eb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"08f127fa-1de2-4a5e-bb18-eecdbc5a01eb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 27 05:45:54 crc kubenswrapper[4760]: E1227 05:45:54.901879 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:55.401862322 +0000 UTC m=+78.161931637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:54 crc kubenswrapper[4760]: I1227 05:45:54.923686 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08f127fa-1de2-4a5e-bb18-eecdbc5a01eb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"08f127fa-1de2-4a5e-bb18-eecdbc5a01eb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.002359 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.002516 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:55.502494235 +0000 UTC m=+78.262563560 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.002646 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.002986 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:55.502975797 +0000 UTC m=+78.263045112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.021760 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.082118 4760 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pdd96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.082185 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" podUID="64a3b986-ea3d-4cbb-83ba-44971b220664" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.082186 4760 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pdd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.082247 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" podUID="64a3b986-ea3d-4cbb-83ba-44971b220664" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.104009 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.104516 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:55.604494422 +0000 UTC m=+78.364563737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.205192 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.205596 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:55.705576077 +0000 UTC m=+78.465645402 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.306534 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.306675 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:55.806657061 +0000 UTC m=+78.566726386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.306953 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.307400 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:55.807390149 +0000 UTC m=+78.567459464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.408953 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.409075 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:55.909051018 +0000 UTC m=+78.669120333 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.410383 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.410908 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:55.910883863 +0000 UTC m=+78.670953208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.478707 4760 goroutinemap.go:150] Operation for "/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" failed. No retries permitted until 2025-12-27 05:45:55.978663112 +0000 UTC m=+78.738732477 (durationBeforeRetry 500ms). 
Error: RegisterPlugin error -- failed to get plugin info using RPC GetInfo at socket /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock, err: rpc error: code = DeadlineExceeded desc = context deadline exceeded Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.502441 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:45:55 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Dec 27 05:45:55 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:45:55 crc kubenswrapper[4760]: healthz check failed Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.502505 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.511883 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.512066 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:56.012026099 +0000 UTC m=+78.772095464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.512930 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.513478 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:56.013453324 +0000 UTC m=+78.773522669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.605397 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pdd96" Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.607215 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7msw2" event={"ID":"55c01569-3cd2-4c5f-9039-b61176dac0f3","Type":"ContainerStarted","Data":"fed0264430a07eae7a46ee4d6d8d7f911aa5d60a2ad67a0072f471aceabf782f"} Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.614135 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.614604 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:56.114590829 +0000 UTC m=+78.874660144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.615319 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xz5fn" event={"ID":"dad6d9ba-2049-4d43-a786-9ce87644643f","Type":"ContainerStarted","Data":"8a758f6b925b34344593fe40e5afae6b2a90ac28321ed22a7c8a373d4ef5f007"} Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.716854 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.717191 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:56.217179341 +0000 UTC m=+78.977248656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.818326 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.818504 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:56.318479141 +0000 UTC m=+79.078548456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.818648 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.818920 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:56.318908631 +0000 UTC m=+79.078977946 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.861034 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 27 05:45:55 crc kubenswrapper[4760]: W1227 05:45:55.886327 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod08f127fa_1de2_4a5e_bb18_eecdbc5a01eb.slice/crio-ba2ab04266aad0751e1e2dc0146e1d4856c13686ae3dc82e5b1c66726345aeba WatchSource:0}: Error finding container ba2ab04266aad0751e1e2dc0146e1d4856c13686ae3dc82e5b1c66726345aeba: Status 404 returned error can't find the container with id ba2ab04266aad0751e1e2dc0146e1d4856c13686ae3dc82e5b1c66726345aeba Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.919953 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.920135 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:56.420111278 +0000 UTC m=+79.180180593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:55 crc kubenswrapper[4760]: I1227 05:45:55.920721 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:55 crc kubenswrapper[4760]: E1227 05:45:55.921058 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:56.421043862 +0000 UTC m=+79.181113177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.024197 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:56 crc kubenswrapper[4760]: E1227 05:45:56.024651 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:56.524626777 +0000 UTC m=+79.284696092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.024853 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:56 crc kubenswrapper[4760]: E1227 05:45:56.025592 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:56.52555063 +0000 UTC m=+79.285619945 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.126652 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:56 crc kubenswrapper[4760]: E1227 05:45:56.126779 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:56.626755188 +0000 UTC m=+79.386824503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.127160 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:56 crc kubenswrapper[4760]: E1227 05:45:56.127314 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:56.627305381 +0000 UTC m=+79.387374696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.227997 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:56 crc kubenswrapper[4760]: E1227 05:45:56.228407 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:56.728393696 +0000 UTC m=+79.488463011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.329827 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:56 crc kubenswrapper[4760]: E1227 05:45:56.330292 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:56.830269419 +0000 UTC m=+79.590338774 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.431470 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:56 crc kubenswrapper[4760]: E1227 05:45:56.431668 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-27 05:45:56.931643672 +0000 UTC m=+79.691712987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.431775 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:56 crc kubenswrapper[4760]: E1227 05:45:56.432119 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-27 05:45:56.932081612 +0000 UTC m=+79.692150917 (durationBeforeRetry 500ms). 
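The four near-identical records above show kubelet's per-volume retry gate during a startup race: the new image-registry pod (bf23480c...) cannot stage its PVC and the old pod (8f668bae...) cannot tear the same volume down, because the kubevirt.io.hostpath-provisioner node plugin has not yet re-registered after the kubelet restart. Each failed operation is parked until a deadline 500ms out ("No retries permitted until ... durationBeforeRetry 500ms"). A minimal sketch of that fixed-delay gate pattern, with invented names (kubelet's real bookkeeping lives in nestedpendingoperations.go and is more elaborate):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// retryGate parks failed operations until a fixed delay has elapsed,
// mirroring the "No retries permitted until ... (durationBeforeRetry 500ms)"
// records above. Illustrative only.
type retryGate struct {
	mu    sync.Mutex
	next  map[string]time.Time // operation key -> earliest retry time
	delay time.Duration
}

func newRetryGate(delay time.Duration) *retryGate {
	return &retryGate{next: make(map[string]time.Time), delay: delay}
}

// Run executes op unless the key is still inside its backoff window.
func (g *retryGate) Run(key string, op func() error) error {
	g.mu.Lock()
	if t, ok := g.next[key]; ok && time.Now().Before(t) {
		g.mu.Unlock()
		return fmt.Errorf("no retries permitted until %s", t.Format(time.RFC3339Nano))
	}
	g.mu.Unlock()

	err := op()
	g.mu.Lock()
	defer g.mu.Unlock()
	if err != nil {
		g.next[key] = time.Now().Add(g.delay) // park the operation
		return err
	}
	delete(g.next, key) // success clears the gate
	return nil
}

func main() {
	gate := newRetryGate(500 * time.Millisecond)
	key := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db"
	mount := func() error {
		return errors.New("driver not found in the list of registered CSI drivers")
	}
	for i := 0; i < 3; i++ {
		fmt.Println(gate.Run(key, mount))
		time.Sleep(200 * time.Millisecond)
	}
}
```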
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzs8s" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.471580 4760 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-27T05:45:53.608070199Z","Handler":null,"Name":""} Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.473419 4760 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.473450 4760 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.501403 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:45:56 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Dec 27 05:45:56 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:45:56 crc kubenswrapper[4760]: healthz check failed Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.501459 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.533044 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.538448 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
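Here the race resolves: the plugin watcher picks up the registration socket /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock, csi_plugin.go validates and registers the driver at /var/lib/kubelet/plugins/csi-hostpath/csi.sock (version 1.0.0), and the stuck TearDown immediately succeeds at 05:45:56.538448. Kubelet discovers such sockets by watching the plugins_registry directory for filesystem events; a rough sketch of that discovery loop under that assumption (handleRegistration stands in for kubelet's real GetInfo/NotifyRegistrationStatus gRPC handshake):

```go
package main

import (
	"log"
	"path/filepath"
	"strings"

	"github.com/fsnotify/fsnotify"
)

// Sketch of plugin-socket discovery under /var/lib/kubelet/plugins_registry,
// the mechanism behind the "RegisterPlugin started" record above.
func main() {
	const registryDir = "/var/lib/kubelet/plugins_registry"

	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add(registryDir); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev := <-w.Events:
			// New *-reg.sock files announce a plugin, e.g.
			// kubevirt.io.hostpath-provisioner-reg.sock.
			if ev.Op&fsnotify.Create != 0 && strings.HasSuffix(ev.Name, "-reg.sock") {
				handleRegistration(ev.Name)
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}

// handleRegistration is a hypothetical stand-in for the registration handshake.
func handleRegistration(sock string) {
	log.Printf("found plugin registration socket %s (plugin %s)",
		sock, strings.TrimSuffix(filepath.Base(sock), "-reg.sock"))
}
```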
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.620816 4760 generic.go:334] "Generic (PLEG): container finished" podID="62b0425e-20fc-41c8-99c0-cdccdc48d766" containerID="de1fc27da355e8a8dcdf4c0c3bb143a78c3e2c22e482e8bb3f5a2eb759ce1fa8" exitCode=0 Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.620882 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" event={"ID":"62b0425e-20fc-41c8-99c0-cdccdc48d766","Type":"ContainerDied","Data":"de1fc27da355e8a8dcdf4c0c3bb143a78c3e2c22e482e8bb3f5a2eb759ce1fa8"} Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.622850 4760 generic.go:334] "Generic (PLEG): container finished" podID="89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" containerID="381e968739089da9ac3ea6ca4b4e14fd50e290e7724163ac2a2e0b8915ce8931" exitCode=0 Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.622885 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr26d" event={"ID":"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1","Type":"ContainerDied","Data":"381e968739089da9ac3ea6ca4b4e14fd50e290e7724163ac2a2e0b8915ce8931"} Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.624610 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"08f127fa-1de2-4a5e-bb18-eecdbc5a01eb","Type":"ContainerStarted","Data":"e18c4680ee3613c5711127bac834867e8e934ec207c10027fbf72835a0c57125"} Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.624655 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"08f127fa-1de2-4a5e-bb18-eecdbc5a01eb","Type":"ContainerStarted","Data":"ba2ab04266aad0751e1e2dc0146e1d4856c13686ae3dc82e5b1c66726345aeba"} Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.626138 4760 generic.go:334] "Generic (PLEG): container finished" podID="55c01569-3cd2-4c5f-9039-b61176dac0f3" containerID="a192e6a47d0c84ff513bf8cd875c02f3ab5abbb5e7bd59a11e3f9d8d1bff72d5" exitCode=0 Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.626192 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7msw2" event={"ID":"55c01569-3cd2-4c5f-9039-b61176dac0f3","Type":"ContainerDied","Data":"a192e6a47d0c84ff513bf8cd875c02f3ab5abbb5e7bd59a11e3f9d8d1bff72d5"} Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.628989 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l26l5" event={"ID":"7f4d886c-9f72-4358-8ddb-f820f7181639","Type":"ContainerStarted","Data":"5bf30697708b2d7591c1a2fa98ce84b5bc478969fcd15da9f55666d4ab8eeb90"} Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.634331 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.646192 4760 generic.go:334] "Generic (PLEG): container finished" podID="dad6d9ba-2049-4d43-a786-9ce87644643f" containerID="51c78926e7a0bb86ddb65865bdc10b112414d3468e8223de99fe18d93594072e" exitCode=0 Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.646529 
4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xz5fn" event={"ID":"dad6d9ba-2049-4d43-a786-9ce87644643f","Type":"ContainerDied","Data":"51c78926e7a0bb86ddb65865bdc10b112414d3468e8223de99fe18d93594072e"} Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.646313 4760 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.646588 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.664659 4760 generic.go:334] "Generic (PLEG): container finished" podID="ee14eb8b-710a-4d45-b2a6-008c2f02b154" containerID="07487603081c22e970427ddfc8a17c90d4e4003215de998185753048598b9e1e" exitCode=0 Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.664700 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p6tss" event={"ID":"ee14eb8b-710a-4d45-b2a6-008c2f02b154","Type":"ContainerDied","Data":"07487603081c22e970427ddfc8a17c90d4e4003215de998185753048598b9e1e"} Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.671945 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.690854 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzs8s\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.694422 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-l26l5" podStartSLOduration=20.694406104 podStartE2EDuration="20.694406104s" podCreationTimestamp="2025-12-27 05:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:56.692549228 +0000 UTC m=+79.452618553" watchObservedRunningTime="2025-12-27 05:45:56.694406104 +0000 UTC m=+79.454475419" Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.715819 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.715882 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.740537 4760 patch_prober.go:28] interesting pod/apiserver-76f77b778f-wsxtc container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 27 05:45:56 crc 
kubenswrapper[4760]: [+]log ok Dec 27 05:45:56 crc kubenswrapper[4760]: [+]etcd ok Dec 27 05:45:56 crc kubenswrapper[4760]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 27 05:45:56 crc kubenswrapper[4760]: [+]poststarthook/generic-apiserver-start-informers ok Dec 27 05:45:56 crc kubenswrapper[4760]: [+]poststarthook/max-in-flight-filter ok Dec 27 05:45:56 crc kubenswrapper[4760]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 27 05:45:56 crc kubenswrapper[4760]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 27 05:45:56 crc kubenswrapper[4760]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 27 05:45:56 crc kubenswrapper[4760]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 27 05:45:56 crc kubenswrapper[4760]: [+]poststarthook/project.openshift.io-projectcache ok Dec 27 05:45:56 crc kubenswrapper[4760]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 27 05:45:56 crc kubenswrapper[4760]: [+]poststarthook/openshift.io-startinformers ok Dec 27 05:45:56 crc kubenswrapper[4760]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 27 05:45:56 crc kubenswrapper[4760]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 27 05:45:56 crc kubenswrapper[4760]: livez check failed Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.740609 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" podUID="cf3096b8-b961-454b-9647-ac2b9d3868ca" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.827494 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 27 05:45:56 crc kubenswrapper[4760]: I1227 05:45:56.830425 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:45:57 crc kubenswrapper[4760]: I1227 05:45:57.041566 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:57 crc kubenswrapper[4760]: I1227 05:45:57.042815 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:57 crc kubenswrapper[4760]: I1227 05:45:57.060262 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:57 crc kubenswrapper[4760]: I1227 05:45:57.162538 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzs8s"] Dec 27 05:45:57 crc kubenswrapper[4760]: W1227 05:45:57.172991 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf23480c_73e2_4c48_b39c_92ef17211274.slice/crio-821737ab9bf9f94da76425dba1bf706fcbe21f5048d02a595d7083a300486442 WatchSource:0}: Error finding container 821737ab9bf9f94da76425dba1bf706fcbe21f5048d02a595d7083a300486442: Status 404 returned error can't find the container with id 821737ab9bf9f94da76425dba1bf706fcbe21f5048d02a595d7083a300486442 Dec 27 05:45:57 crc kubenswrapper[4760]: I1227 05:45:57.501489 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:45:57 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Dec 27 05:45:57 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:45:57 crc kubenswrapper[4760]: healthz check failed Dec 27 05:45:57 crc kubenswrapper[4760]: I1227 05:45:57.501542 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:45:57 crc kubenswrapper[4760]: I1227 05:45:57.511353 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 27 05:45:57 crc kubenswrapper[4760]: I1227 05:45:57.675303 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" event={"ID":"bf23480c-73e2-4c48-b39c-92ef17211274","Type":"ContainerStarted","Data":"821737ab9bf9f94da76425dba1bf706fcbe21f5048d02a595d7083a300486442"} Dec 27 05:45:57 crc kubenswrapper[4760]: I1227 05:45:57.693183 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b6725" Dec 27 05:45:57 crc kubenswrapper[4760]: I1227 05:45:57.697690 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.697669894 podStartE2EDuration="3.697669894s" podCreationTimestamp="2025-12-27 05:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:57.697135962 +0000 UTC m=+80.457205277" watchObservedRunningTime="2025-12-27 
05:45:57.697669894 +0000 UTC m=+80.457739209" Dec 27 05:45:57 crc kubenswrapper[4760]: I1227 05:45:57.984428 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" Dec 27 05:45:58 crc kubenswrapper[4760]: I1227 05:45:58.065685 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62b0425e-20fc-41c8-99c0-cdccdc48d766-config-volume\") pod \"62b0425e-20fc-41c8-99c0-cdccdc48d766\" (UID: \"62b0425e-20fc-41c8-99c0-cdccdc48d766\") " Dec 27 05:45:58 crc kubenswrapper[4760]: I1227 05:45:58.066050 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqm5w\" (UniqueName: \"kubernetes.io/projected/62b0425e-20fc-41c8-99c0-cdccdc48d766-kube-api-access-kqm5w\") pod \"62b0425e-20fc-41c8-99c0-cdccdc48d766\" (UID: \"62b0425e-20fc-41c8-99c0-cdccdc48d766\") " Dec 27 05:45:58 crc kubenswrapper[4760]: I1227 05:45:58.066132 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62b0425e-20fc-41c8-99c0-cdccdc48d766-secret-volume\") pod \"62b0425e-20fc-41c8-99c0-cdccdc48d766\" (UID: \"62b0425e-20fc-41c8-99c0-cdccdc48d766\") " Dec 27 05:45:58 crc kubenswrapper[4760]: I1227 05:45:58.067036 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62b0425e-20fc-41c8-99c0-cdccdc48d766-config-volume" (OuterVolumeSpecName: "config-volume") pod "62b0425e-20fc-41c8-99c0-cdccdc48d766" (UID: "62b0425e-20fc-41c8-99c0-cdccdc48d766"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:45:58 crc kubenswrapper[4760]: I1227 05:45:58.082252 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62b0425e-20fc-41c8-99c0-cdccdc48d766-kube-api-access-kqm5w" (OuterVolumeSpecName: "kube-api-access-kqm5w") pod "62b0425e-20fc-41c8-99c0-cdccdc48d766" (UID: "62b0425e-20fc-41c8-99c0-cdccdc48d766"). InnerVolumeSpecName "kube-api-access-kqm5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:45:58 crc kubenswrapper[4760]: I1227 05:45:58.085081 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b0425e-20fc-41c8-99c0-cdccdc48d766-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "62b0425e-20fc-41c8-99c0-cdccdc48d766" (UID: "62b0425e-20fc-41c8-99c0-cdccdc48d766"). InnerVolumeSpecName "secret-volume". 
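The UnmountVolume / TearDown succeeded / "Volume detached" sequence above is the volume manager's reconciler at work: once the collect-profiles job pod (62b0425e...) finishes, its config-volume, secret-volume, and kube-api-access projected volume drop out of the desired state, and the reconciler unmounts whatever is still actually mounted. A toy version of that diff-and-act loop, with invented types (the real reconciler_common.go also drives attach/detach and device mounts):

```go
package main

import "fmt"

// Toy reconciler: anything mounted but no longer desired gets unmounted.
// Types and names are invented for illustration.
type volumeKey struct{ podUID, volume string }

func reconcile(desired, actual map[volumeKey]bool, unmount func(volumeKey) error) {
	for key := range actual {
		if desired[key] {
			continue // still needed, leave it mounted
		}
		fmt.Printf("operationExecutor.UnmountVolume started for %q pod %q\n", key.volume, key.podUID)
		if err := unmount(key); err != nil {
			fmt.Println("tear down failed, will retry:", err)
			continue
		}
		delete(actual, key)
		fmt.Printf("Volume detached for volume %q\n", key.volume)
	}
}

func main() {
	actual := map[volumeKey]bool{
		{podUID: "62b0425e", volume: "config-volume"}: true,
		{podUID: "62b0425e", volume: "secret-volume"}: true,
	}
	desired := map[volumeKey]bool{} // pod finished: nothing desired anymore
	reconcile(desired, actual, func(volumeKey) error { return nil })
}
```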
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:45:58 crc kubenswrapper[4760]: I1227 05:45:58.167880 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62b0425e-20fc-41c8-99c0-cdccdc48d766-config-volume\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:58 crc kubenswrapper[4760]: I1227 05:45:58.167920 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqm5w\" (UniqueName: \"kubernetes.io/projected/62b0425e-20fc-41c8-99c0-cdccdc48d766-kube-api-access-kqm5w\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:58 crc kubenswrapper[4760]: I1227 05:45:58.167936 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62b0425e-20fc-41c8-99c0-cdccdc48d766-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 27 05:45:58 crc kubenswrapper[4760]: I1227 05:45:58.501453 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:45:58 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Dec 27 05:45:58 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:45:58 crc kubenswrapper[4760]: healthz check failed Dec 27 05:45:58 crc kubenswrapper[4760]: I1227 05:45:58.501524 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:45:58 crc kubenswrapper[4760]: I1227 05:45:58.510277 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 27 05:45:58 crc kubenswrapper[4760]: I1227 05:45:58.689660 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" event={"ID":"62b0425e-20fc-41c8-99c0-cdccdc48d766","Type":"ContainerDied","Data":"29e28891056005a86eb00fe1028c6d88df6301647e2cd2c9a29961d3fc722a06"} Dec 27 05:45:58 crc kubenswrapper[4760]: I1227 05:45:58.689705 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29e28891056005a86eb00fe1028c6d88df6301647e2cd2c9a29961d3fc722a06" Dec 27 05:45:58 crc kubenswrapper[4760]: I1227 05:45:58.690487 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29446905-rnvbt" Dec 27 05:45:58 crc kubenswrapper[4760]: I1227 05:45:58.713155 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=0.713138563 podStartE2EDuration="713.138563ms" podCreationTimestamp="2025-12-27 05:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:45:58.711686368 +0000 UTC m=+81.471755683" watchObservedRunningTime="2025-12-27 05:45:58.713138563 +0000 UTC m=+81.473207878" Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.088345 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 27 05:45:59 crc kubenswrapper[4760]: E1227 05:45:59.088638 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b0425e-20fc-41c8-99c0-cdccdc48d766" containerName="collect-profiles" Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.088653 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b0425e-20fc-41c8-99c0-cdccdc48d766" containerName="collect-profiles" Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.088771 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b0425e-20fc-41c8-99c0-cdccdc48d766" containerName="collect-profiles" Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.089190 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.090608 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.091450 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.092972 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.162684 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-p9vf5" Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.191370 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa3415a3-bb89-47fd-944c-879aaf46b72c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"aa3415a3-bb89-47fd-944c-879aaf46b72c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.191445 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa3415a3-bb89-47fd-944c-879aaf46b72c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"aa3415a3-bb89-47fd-944c-879aaf46b72c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.292186 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa3415a3-bb89-47fd-944c-879aaf46b72c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"aa3415a3-bb89-47fd-944c-879aaf46b72c\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.292332 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa3415a3-bb89-47fd-944c-879aaf46b72c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"aa3415a3-bb89-47fd-944c-879aaf46b72c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.292970 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa3415a3-bb89-47fd-944c-879aaf46b72c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"aa3415a3-bb89-47fd-944c-879aaf46b72c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.313213 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa3415a3-bb89-47fd-944c-879aaf46b72c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"aa3415a3-bb89-47fd-944c-879aaf46b72c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.404847 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.513296 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:45:59 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Dec 27 05:45:59 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:45:59 crc kubenswrapper[4760]: healthz check failed Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.513343 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.612448 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.711385 4760 generic.go:334] "Generic (PLEG): container finished" podID="08f127fa-1de2-4a5e-bb18-eecdbc5a01eb" containerID="e18c4680ee3613c5711127bac834867e8e934ec207c10027fbf72835a0c57125" exitCode=0 Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.711583 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"08f127fa-1de2-4a5e-bb18-eecdbc5a01eb","Type":"ContainerDied","Data":"e18c4680ee3613c5711127bac834867e8e934ec207c10027fbf72835a0c57125"} Dec 27 05:45:59 crc kubenswrapper[4760]: I1227 05:45:59.716499 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"aa3415a3-bb89-47fd-944c-879aaf46b72c","Type":"ContainerStarted","Data":"af88020ca82796cf307cc48d9e2e1c83daf878baa7aedbd48bf1fa9d2085c153"} Dec 27 05:46:00 crc kubenswrapper[4760]: I1227 05:46:00.506727 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:46:00 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Dec 27 05:46:00 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:46:00 crc kubenswrapper[4760]: healthz check failed Dec 27 05:46:00 crc kubenswrapper[4760]: I1227 05:46:00.507053 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:46:00 crc kubenswrapper[4760]: I1227 05:46:00.728332 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" event={"ID":"bf23480c-73e2-4c48-b39c-92ef17211274","Type":"ContainerStarted","Data":"6bce21252d21524a8d0cf18bb5333a63c11e1305deb2ad9cec3113cdc88c1269"} Dec 27 05:46:00 crc kubenswrapper[4760]: I1227 05:46:00.729989 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"aa3415a3-bb89-47fd-944c-879aaf46b72c","Type":"ContainerStarted","Data":"bbd9d60696a74896f6b588106ef7c8a1055e9625407773b423e77180d89e6740"} Dec 27 05:46:00 crc kubenswrapper[4760]: E1227 05:46:00.820111 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:00 crc kubenswrapper[4760]: E1227 05:46:00.831906 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:00 crc kubenswrapper[4760]: E1227 05:46:00.834069 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:00 crc kubenswrapper[4760]: E1227 05:46:00.834121 4760 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" podUID="abd7bc00-bf5b-48a1-94fe-82dae0bc732e" containerName="kube-multus-additional-cni-plugins" Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.060023 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.121641 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08f127fa-1de2-4a5e-bb18-eecdbc5a01eb-kubelet-dir\") pod \"08f127fa-1de2-4a5e-bb18-eecdbc5a01eb\" (UID: \"08f127fa-1de2-4a5e-bb18-eecdbc5a01eb\") " Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.121703 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08f127fa-1de2-4a5e-bb18-eecdbc5a01eb-kube-api-access\") pod \"08f127fa-1de2-4a5e-bb18-eecdbc5a01eb\" (UID: \"08f127fa-1de2-4a5e-bb18-eecdbc5a01eb\") " Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.121848 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08f127fa-1de2-4a5e-bb18-eecdbc5a01eb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "08f127fa-1de2-4a5e-bb18-eecdbc5a01eb" (UID: "08f127fa-1de2-4a5e-bb18-eecdbc5a01eb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.121992 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08f127fa-1de2-4a5e-bb18-eecdbc5a01eb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.130300 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08f127fa-1de2-4a5e-bb18-eecdbc5a01eb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "08f127fa-1de2-4a5e-bb18-eecdbc5a01eb" (UID: "08f127fa-1de2-4a5e-bb18-eecdbc5a01eb"). InnerVolumeSpecName "kube-api-access". 
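The recurring router-default and openshift-apiserver probe records on either side of this point all dump the same healthz-style body: one "[+]name ok" or "[-]name failed: reason withheld" line per named check, a trailing "healthz check failed", and HTTP 500 for as long as any check fails, which is exactly what kubelet's prober reports as "HTTP probe failed with statuscode: 500". A compact handler that produces the same shape (a sketch only, not the actual healthz package those binaries use):

```go
package main

import (
	"fmt"
	"net/http"
)

// Minimal healthz-style aggregator matching the probe bodies logged above:
// per-check [+]/[-] lines, HTTP 500 until every check passes.
type check struct {
	name string
	fn   func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // probe sees statuscode: 500
			fmt.Fprint(w, body+"healthz check failed\n")
			return
		}
		fmt.Fprint(w, body+"ok\n")
	}
}

func main() {
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not ready") }},
		{"process-running", func() error { return nil }},
	}
	http.Handle("/healthz", healthz(checks))
	_ = http.ListenAndServe(":8080", nil)
}
```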
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.224028 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08f127fa-1de2-4a5e-bb18-eecdbc5a01eb-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.501395 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:46:01 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Dec 27 05:46:01 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:46:01 crc kubenswrapper[4760]: healthz check failed Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.501458 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.720349 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.725290 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-wsxtc" Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.741787 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"08f127fa-1de2-4a5e-bb18-eecdbc5a01eb","Type":"ContainerDied","Data":"ba2ab04266aad0751e1e2dc0146e1d4856c13686ae3dc82e5b1c66726345aeba"} Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.743690 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba2ab04266aad0751e1e2dc0146e1d4856c13686ae3dc82e5b1c66726345aeba" Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.741924 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.743833 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.841072 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" podStartSLOduration=59.841057796 podStartE2EDuration="59.841057796s" podCreationTimestamp="2025-12-27 05:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:46:01.826893179 +0000 UTC m=+84.586962494" watchObservedRunningTime="2025-12-27 05:46:01.841057796 +0000 UTC m=+84.601127111" Dec 27 05:46:01 crc kubenswrapper[4760]: I1227 05:46:01.844197 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.843073876 podStartE2EDuration="2.843073876s" podCreationTimestamp="2025-12-27 05:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:46:01.840071042 +0000 UTC m=+84.600140357" watchObservedRunningTime="2025-12-27 05:46:01.843073876 +0000 UTC m=+84.603143191" Dec 27 05:46:02 crc kubenswrapper[4760]: I1227 05:46:02.415939 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-g5866 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 27 05:46:02 crc kubenswrapper[4760]: I1227 05:46:02.415981 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-g5866 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 27 05:46:02 crc kubenswrapper[4760]: I1227 05:46:02.416002 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-g5866" podUID="0015afce-dba1-4f1d-a3d3-5f9abe477e43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 27 05:46:02 crc kubenswrapper[4760]: I1227 05:46:02.416042 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-g5866" podUID="0015afce-dba1-4f1d-a3d3-5f9abe477e43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 27 05:46:02 crc kubenswrapper[4760]: I1227 05:46:02.504410 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:46:02 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Dec 27 05:46:02 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:46:02 crc kubenswrapper[4760]: healthz check failed Dec 27 05:46:02 crc kubenswrapper[4760]: I1227 05:46:02.504514 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" 
podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:46:02 crc kubenswrapper[4760]: I1227 05:46:02.517183 4760 patch_prober.go:28] interesting pod/console-f9d7485db-cmmr9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Dec 27 05:46:02 crc kubenswrapper[4760]: I1227 05:46:02.517254 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cmmr9" podUID="3dfa4237-e979-4215-9f2c-20aa6303cae7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Dec 27 05:46:02 crc kubenswrapper[4760]: I1227 05:46:02.760303 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa3415a3-bb89-47fd-944c-879aaf46b72c" containerID="bbd9d60696a74896f6b588106ef7c8a1055e9625407773b423e77180d89e6740" exitCode=0 Dec 27 05:46:02 crc kubenswrapper[4760]: I1227 05:46:02.760375 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"aa3415a3-bb89-47fd-944c-879aaf46b72c","Type":"ContainerDied","Data":"bbd9d60696a74896f6b588106ef7c8a1055e9625407773b423e77180d89e6740"} Dec 27 05:46:03 crc kubenswrapper[4760]: I1227 05:46:03.501770 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:46:03 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Dec 27 05:46:03 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:46:03 crc kubenswrapper[4760]: healthz check failed Dec 27 05:46:03 crc kubenswrapper[4760]: I1227 05:46:03.501833 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:46:04 crc kubenswrapper[4760]: I1227 05:46:04.502962 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:46:04 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Dec 27 05:46:04 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:46:04 crc kubenswrapper[4760]: healthz check failed Dec 27 05:46:04 crc kubenswrapper[4760]: I1227 05:46:04.503222 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:46:05 crc kubenswrapper[4760]: I1227 05:46:05.441886 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:46:05 crc 
kubenswrapper[4760]: I1227 05:46:05.441937 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:46:05 crc kubenswrapper[4760]: I1227 05:46:05.441971 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:46:05 crc kubenswrapper[4760]: I1227 05:46:05.442023 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:46:05 crc kubenswrapper[4760]: I1227 05:46:05.447689 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:46:05 crc kubenswrapper[4760]: I1227 05:46:05.447697 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:46:05 crc kubenswrapper[4760]: I1227 05:46:05.452584 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:46:05 crc kubenswrapper[4760]: I1227 05:46:05.454490 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:46:05 crc kubenswrapper[4760]: I1227 05:46:05.473207 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 27 05:46:05 crc kubenswrapper[4760]: I1227 05:46:05.501220 4760 patch_prober.go:28] interesting pod/router-default-5444994796-mvf5k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 27 05:46:05 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Dec 27 05:46:05 crc kubenswrapper[4760]: [+]process-running ok Dec 27 05:46:05 crc kubenswrapper[4760]: healthz check failed Dec 27 05:46:05 crc kubenswrapper[4760]: I1227 05:46:05.501285 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mvf5k" podUID="6f6b13a4-0ce7-4190-a0a3-2741bd546a1d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 27 05:46:05 crc kubenswrapper[4760]: I1227 05:46:05.571010 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:46:05 crc kubenswrapper[4760]: I1227 05:46:05.788022 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 27 05:46:07 crc kubenswrapper[4760]: I1227 05:46:07.408413 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:46:07 crc kubenswrapper[4760]: I1227 05:46:07.411083 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-mvf5k" Dec 27 05:46:10 crc kubenswrapper[4760]: E1227 05:46:10.815451 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:10 crc kubenswrapper[4760]: E1227 05:46:10.817107 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:10 crc kubenswrapper[4760]: E1227 05:46:10.818340 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:10 crc kubenswrapper[4760]: E1227 05:46:10.818403 4760 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" podUID="abd7bc00-bf5b-48a1-94fe-82dae0bc732e" containerName="kube-multus-additional-cni-plugins" Dec 27 05:46:12 crc kubenswrapper[4760]: 
I1227 05:46:12.421221 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-g5866" Dec 27 05:46:12 crc kubenswrapper[4760]: I1227 05:46:12.585962 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:46:12 crc kubenswrapper[4760]: I1227 05:46:12.590595 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:46:16 crc kubenswrapper[4760]: I1227 05:46:16.834695 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:46:20 crc kubenswrapper[4760]: E1227 05:46:20.818232 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:20 crc kubenswrapper[4760]: E1227 05:46:20.821142 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:20 crc kubenswrapper[4760]: E1227 05:46:20.822823 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:20 crc kubenswrapper[4760]: E1227 05:46:20.822983 4760 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" podUID="abd7bc00-bf5b-48a1-94fe-82dae0bc732e" containerName="kube-multus-additional-cni-plugins" Dec 27 05:46:21 crc kubenswrapper[4760]: I1227 05:46:21.051602 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z" Dec 27 05:46:21 crc kubenswrapper[4760]: I1227 05:46:21.816848 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 27 05:46:21 crc kubenswrapper[4760]: I1227 05:46:21.861304 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"aa3415a3-bb89-47fd-944c-879aaf46b72c","Type":"ContainerDied","Data":"af88020ca82796cf307cc48d9e2e1c83daf878baa7aedbd48bf1fa9d2085c153"} Dec 27 05:46:21 crc kubenswrapper[4760]: I1227 05:46:21.861379 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 27 05:46:21 crc kubenswrapper[4760]: I1227 05:46:21.861386 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af88020ca82796cf307cc48d9e2e1c83daf878baa7aedbd48bf1fa9d2085c153" Dec 27 05:46:21 crc kubenswrapper[4760]: I1227 05:46:21.877219 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa3415a3-bb89-47fd-944c-879aaf46b72c-kubelet-dir\") pod \"aa3415a3-bb89-47fd-944c-879aaf46b72c\" (UID: \"aa3415a3-bb89-47fd-944c-879aaf46b72c\") " Dec 27 05:46:21 crc kubenswrapper[4760]: I1227 05:46:21.877269 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa3415a3-bb89-47fd-944c-879aaf46b72c-kube-api-access\") pod \"aa3415a3-bb89-47fd-944c-879aaf46b72c\" (UID: \"aa3415a3-bb89-47fd-944c-879aaf46b72c\") " Dec 27 05:46:21 crc kubenswrapper[4760]: I1227 05:46:21.877354 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa3415a3-bb89-47fd-944c-879aaf46b72c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aa3415a3-bb89-47fd-944c-879aaf46b72c" (UID: "aa3415a3-bb89-47fd-944c-879aaf46b72c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:46:21 crc kubenswrapper[4760]: I1227 05:46:21.877542 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa3415a3-bb89-47fd-944c-879aaf46b72c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 27 05:46:21 crc kubenswrapper[4760]: I1227 05:46:21.886150 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa3415a3-bb89-47fd-944c-879aaf46b72c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aa3415a3-bb89-47fd-944c-879aaf46b72c" (UID: "aa3415a3-bb89-47fd-944c-879aaf46b72c"). InnerVolumeSpecName "kube-api-access". 
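The ExecSync failures repeating at 05:46:00, 05:46:10, 05:46:20, and 05:46:30 are kubelet's readiness probe for the cni-sysctl-allowlist-ds-h4drj pod: it execs /bin/bash -c "test -f /ready/ready" in the kube-multus-additional-cni-plugins container, and the runtime rejects the exec first while the container is stopping and then, after it exits with code 137, because the process no longer exists. Expressed with the k8s.io/api types, such a probe looks roughly like this (a sketch; the DaemonSet's actual manifest is not in the log, and every field except the command is an assumption):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Readiness probe equivalent to the exec command kubelet logs above.
	// PeriodSeconds is an assumption inferred from the ~10s spacing of the
	// failed probes; the real spec may differ.
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"/bin/bash", "-c", "test -f /ready/ready"},
			},
		},
		PeriodSeconds:    10,
		FailureThreshold: 3,
	}

	container := corev1.Container{
		Name:           "kube-multus-additional-cni-plugins",
		ReadinessProbe: probe,
	}
	fmt.Printf("%s readiness: %v\n", container.Name, container.ReadinessProbe.Exec.Command)
}
```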
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:46:21 crc kubenswrapper[4760]: I1227 05:46:21.978578 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa3415a3-bb89-47fd-944c-879aaf46b72c-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 27 05:46:23 crc kubenswrapper[4760]: I1227 05:46:23.522118 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 27 05:46:23 crc kubenswrapper[4760]: I1227 05:46:23.877507 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-h4drj_abd7bc00-bf5b-48a1-94fe-82dae0bc732e/kube-multus-additional-cni-plugins/0.log" Dec 27 05:46:23 crc kubenswrapper[4760]: I1227 05:46:23.877563 4760 generic.go:334] "Generic (PLEG): container finished" podID="abd7bc00-bf5b-48a1-94fe-82dae0bc732e" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" exitCode=137 Dec 27 05:46:23 crc kubenswrapper[4760]: I1227 05:46:23.877660 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" event={"ID":"abd7bc00-bf5b-48a1-94fe-82dae0bc732e","Type":"ContainerDied","Data":"d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668"} Dec 27 05:46:30 crc kubenswrapper[4760]: E1227 05:46:30.814749 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668 is running failed: container process not found" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:30 crc kubenswrapper[4760]: E1227 05:46:30.816473 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668 is running failed: container process not found" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:30 crc kubenswrapper[4760]: E1227 05:46:30.816963 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668 is running failed: container process not found" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:30 crc kubenswrapper[4760]: E1227 05:46:30.817010 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" podUID="abd7bc00-bf5b-48a1-94fe-82dae0bc732e" containerName="kube-multus-additional-cni-plugins" Dec 27 05:46:32 crc kubenswrapper[4760]: E1227 05:46:32.793054 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 27 05:46:32 crc kubenswrapper[4760]: E1227 05:46:32.793240 4760 kuberuntime_manager.go:1274] "Unhandled 
Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jx74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6gp8b_openshift-marketplace(21439a0b-e71e-4574-87d5-cd7881120c41): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 27 05:46:32 crc kubenswrapper[4760]: E1227 05:46:32.794388 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6gp8b" podUID="21439a0b-e71e-4574-87d5-cd7881120c41" Dec 27 05:46:32 crc kubenswrapper[4760]: I1227 05:46:32.952356 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=9.952338836 podStartE2EDuration="9.952338836s" podCreationTimestamp="2025-12-27 05:46:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:46:27.565964833 +0000 UTC m=+110.326034228" watchObservedRunningTime="2025-12-27 05:46:32.952338836 +0000 UTC m=+115.712408151" Dec 27 05:46:35 crc kubenswrapper[4760]: I1227 05:46:35.485493 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 27 05:46:35 crc kubenswrapper[4760]: E1227 05:46:35.488010 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3415a3-bb89-47fd-944c-879aaf46b72c" containerName="pruner" Dec 27 05:46:35 crc kubenswrapper[4760]: I1227 05:46:35.488071 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3415a3-bb89-47fd-944c-879aaf46b72c" containerName="pruner" Dec 27 05:46:35 crc kubenswrapper[4760]: E1227 05:46:35.488149 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f127fa-1de2-4a5e-bb18-eecdbc5a01eb" containerName="pruner" Dec 27 05:46:35 crc 
kubenswrapper[4760]: I1227 05:46:35.488245 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f127fa-1de2-4a5e-bb18-eecdbc5a01eb" containerName="pruner" Dec 27 05:46:35 crc kubenswrapper[4760]: I1227 05:46:35.488528 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3415a3-bb89-47fd-944c-879aaf46b72c" containerName="pruner" Dec 27 05:46:35 crc kubenswrapper[4760]: I1227 05:46:35.488564 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="08f127fa-1de2-4a5e-bb18-eecdbc5a01eb" containerName="pruner" Dec 27 05:46:35 crc kubenswrapper[4760]: I1227 05:46:35.490772 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 27 05:46:35 crc kubenswrapper[4760]: I1227 05:46:35.494901 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 27 05:46:35 crc kubenswrapper[4760]: I1227 05:46:35.495682 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 27 05:46:35 crc kubenswrapper[4760]: I1227 05:46:35.495809 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 27 05:46:35 crc kubenswrapper[4760]: I1227 05:46:35.566719 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c05638c-2f19-4a13-8467-07f04339c86d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4c05638c-2f19-4a13-8467-07f04339c86d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 27 05:46:35 crc kubenswrapper[4760]: I1227 05:46:35.566884 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c05638c-2f19-4a13-8467-07f04339c86d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4c05638c-2f19-4a13-8467-07f04339c86d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 27 05:46:35 crc kubenswrapper[4760]: I1227 05:46:35.669954 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c05638c-2f19-4a13-8467-07f04339c86d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4c05638c-2f19-4a13-8467-07f04339c86d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 27 05:46:35 crc kubenswrapper[4760]: I1227 05:46:35.670052 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c05638c-2f19-4a13-8467-07f04339c86d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4c05638c-2f19-4a13-8467-07f04339c86d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 27 05:46:35 crc kubenswrapper[4760]: I1227 05:46:35.670238 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c05638c-2f19-4a13-8467-07f04339c86d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4c05638c-2f19-4a13-8467-07f04339c86d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 27 05:46:35 crc kubenswrapper[4760]: I1227 05:46:35.713282 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c05638c-2f19-4a13-8467-07f04339c86d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"4c05638c-2f19-4a13-8467-07f04339c86d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 27 05:46:35 crc kubenswrapper[4760]: I1227 05:46:35.839964 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 27 05:46:39 crc kubenswrapper[4760]: I1227 05:46:39.882713 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 27 05:46:39 crc kubenswrapper[4760]: I1227 05:46:39.884742 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 27 05:46:39 crc kubenswrapper[4760]: I1227 05:46:39.902136 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 27 05:46:39 crc kubenswrapper[4760]: I1227 05:46:39.932249 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a400a4d-065a-4a2a-8852-0bdb47713da4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7a400a4d-065a-4a2a-8852-0bdb47713da4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 27 05:46:39 crc kubenswrapper[4760]: I1227 05:46:39.932359 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a400a4d-065a-4a2a-8852-0bdb47713da4-kube-api-access\") pod \"installer-9-crc\" (UID: \"7a400a4d-065a-4a2a-8852-0bdb47713da4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 27 05:46:39 crc kubenswrapper[4760]: I1227 05:46:39.932444 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a400a4d-065a-4a2a-8852-0bdb47713da4-var-lock\") pod \"installer-9-crc\" (UID: \"7a400a4d-065a-4a2a-8852-0bdb47713da4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 27 05:46:40 crc kubenswrapper[4760]: I1227 05:46:40.034033 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a400a4d-065a-4a2a-8852-0bdb47713da4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7a400a4d-065a-4a2a-8852-0bdb47713da4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 27 05:46:40 crc kubenswrapper[4760]: I1227 05:46:40.034208 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a400a4d-065a-4a2a-8852-0bdb47713da4-kube-api-access\") pod \"installer-9-crc\" (UID: \"7a400a4d-065a-4a2a-8852-0bdb47713da4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 27 05:46:40 crc kubenswrapper[4760]: I1227 05:46:40.034289 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a400a4d-065a-4a2a-8852-0bdb47713da4-var-lock\") pod \"installer-9-crc\" (UID: \"7a400a4d-065a-4a2a-8852-0bdb47713da4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 27 05:46:40 crc kubenswrapper[4760]: I1227 05:46:40.034387 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a400a4d-065a-4a2a-8852-0bdb47713da4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7a400a4d-065a-4a2a-8852-0bdb47713da4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 27 05:46:40 crc kubenswrapper[4760]: I1227 05:46:40.034434 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a400a4d-065a-4a2a-8852-0bdb47713da4-var-lock\") pod \"installer-9-crc\" (UID: \"7a400a4d-065a-4a2a-8852-0bdb47713da4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 27 05:46:40 crc kubenswrapper[4760]: I1227 05:46:40.054896 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a400a4d-065a-4a2a-8852-0bdb47713da4-kube-api-access\") pod \"installer-9-crc\" (UID: \"7a400a4d-065a-4a2a-8852-0bdb47713da4\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 27 05:46:40 crc kubenswrapper[4760]: I1227 05:46:40.242383 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 27 05:46:40 crc kubenswrapper[4760]: E1227 05:46:40.815063 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668 is running failed: container process not found" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:40 crc kubenswrapper[4760]: E1227 05:46:40.815639 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668 is running failed: container process not found" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:40 crc kubenswrapper[4760]: E1227 05:46:40.815964 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668 is running failed: container process not found" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:40 crc kubenswrapper[4760]: E1227 05:46:40.816048 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" podUID="abd7bc00-bf5b-48a1-94fe-82dae0bc732e" containerName="kube-multus-additional-cni-plugins" Dec 27 05:46:48 crc kubenswrapper[4760]: E1227 05:46:48.379988 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 27 05:46:48 crc kubenswrapper[4760]: E1227 05:46:48.380520 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8qmq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mr26d_openshift-marketplace(89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 27 05:46:48 crc kubenswrapper[4760]: E1227 05:46:48.381869 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mr26d" podUID="89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" Dec 27 05:46:50 crc kubenswrapper[4760]: E1227 05:46:50.815299 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668 is running failed: container process not found" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:50 crc kubenswrapper[4760]: E1227 05:46:50.816570 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668 is running failed: container process not found" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:50 crc kubenswrapper[4760]: E1227 05:46:50.817078 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668 is running failed: container process not found" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 27 05:46:50 crc kubenswrapper[4760]: E1227 05:46:50.817158 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" podUID="abd7bc00-bf5b-48a1-94fe-82dae0bc732e" containerName="kube-multus-additional-cni-plugins" Dec 27 05:46:53 crc kubenswrapper[4760]: E1227 05:46:53.343424 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mr26d" podUID="89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" Dec 27 05:46:55 crc kubenswrapper[4760]: E1227 05:46:55.124224 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 27 05:46:55 crc kubenswrapper[4760]: E1227 05:46:55.124694 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxs2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ch56x_openshift-marketplace(b811bcc1-2320-4047-93a9-9d79516a3551): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 27 05:46:55 crc kubenswrapper[4760]: E1227 05:46:55.125890 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ch56x" podUID="b811bcc1-2320-4047-93a9-9d79516a3551" Dec 27 05:46:56 crc kubenswrapper[4760]: E1227 05:46:56.548680 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
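Each "Unhandled Error" dump above is the kubelet printing the full v1.Container spec of a catalog pod's extract-content init container after an ErrImagePull; the same dump repeats for the certified-operators, redhat-marketplace, community-operators, and redhat-operators pods with only the image, pod name, and projected token volume differing. Rendered as a manifest fragment, the fields embedded in the community-operators dump read roughly as follows; this is a readable transcription of the logged fields, not the authoritative manifest:

# Reconstruction of the extract-content init container from the
# &Container{...} dump above (community-operators-ch56x case).
initContainers:
- name: extract-content
  image: registry.redhat.io/redhat/community-operator-index:v4.18
  command: ["/utilities/copy-content"]
  args:
  - --catalog.from=/configs
  - --catalog.to=/extracted-catalog/catalog
  - --cache.from=/tmp/cache
  - --cache.to=/extracted-catalog/cache
  imagePullPolicy: Always
  terminationMessagePolicy: FallbackToLogsOnError
  securityContext:
    capabilities:
      drop: ["ALL"]
    runAsUser: 1000170000
    runAsNonRoot: true
    allowPrivilegeEscalation: false
  volumeMounts:
  - name: utilities
    mountPath: /utilities
  - name: catalog-content
    mountPath: /extracted-catalog
  - name: kube-api-access-lxs2g   # projected service-account token
    mountPath: /var/run/secrets/kubernetes.io/serviceaccount
    readOnly: true

The rpc "Canceled ... context canceled" wording indicates the pull context was canceled before the copy finished; the later ImagePullBackOff entries for the same pods are the kubelet's standard retry backoff, not a separate failure mode.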
image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 27 05:46:56 crc kubenswrapper[4760]: E1227 05:46:56.548981 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hgvfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xqm68_openshift-marketplace(277187d7-c71a-4583-8d65-2e713e20557d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 27 05:46:56 crc kubenswrapper[4760]: E1227 05:46:56.550902 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xqm68" podUID="277187d7-c71a-4583-8d65-2e713e20557d" Dec 27 05:46:58 crc kubenswrapper[4760]: E1227 05:46:58.569223 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ch56x" podUID="b811bcc1-2320-4047-93a9-9d79516a3551" Dec 27 05:46:58 crc kubenswrapper[4760]: E1227 05:46:58.569223 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xqm68" podUID="277187d7-c71a-4583-8d65-2e713e20557d" Dec 27 05:46:58 crc kubenswrapper[4760]: E1227 05:46:58.646467 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 27 05:46:58 crc kubenswrapper[4760]: E1227 05:46:58.646616 4760 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hktl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lszcc_openshift-marketplace(4b5d003c-9d11-417b-aafd-19fde5a27981): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 27 05:46:58 crc kubenswrapper[4760]: E1227 05:46:58.647793 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-lszcc" podUID="4b5d003c-9d11-417b-aafd-19fde5a27981" Dec 27 05:46:58 crc kubenswrapper[4760]: E1227 05:46:58.654612 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 27 05:46:58 crc kubenswrapper[4760]: E1227 05:46:58.654758 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b85hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7msw2_openshift-marketplace(55c01569-3cd2-4c5f-9039-b61176dac0f3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 27 05:46:58 crc kubenswrapper[4760]: E1227 05:46:58.656020 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-7msw2" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" Dec 27 05:46:58 crc kubenswrapper[4760]: E1227 05:46:58.656565 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 27 05:46:58 crc kubenswrapper[4760]: E1227 05:46:58.656692 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p8w8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-p6tss_openshift-marketplace(ee14eb8b-710a-4d45-b2a6-008c2f02b154): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 27 05:46:58 crc kubenswrapper[4760]: E1227 05:46:58.658406 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-p6tss" podUID="ee14eb8b-710a-4d45-b2a6-008c2f02b154" Dec 27 05:46:58 crc kubenswrapper[4760]: E1227 05:46:58.671961 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 27 05:46:58 crc kubenswrapper[4760]: E1227 05:46:58.672148 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlk8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xz5fn_openshift-marketplace(dad6d9ba-2049-4d43-a786-9ce87644643f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 27 05:46:58 crc kubenswrapper[4760]: E1227 05:46:58.673764 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xz5fn" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" Dec 27 05:46:58 crc kubenswrapper[4760]: I1227 05:46:58.691525 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-h4drj_abd7bc00-bf5b-48a1-94fe-82dae0bc732e/kube-multus-additional-cni-plugins/0.log" Dec 27 05:46:58 crc kubenswrapper[4760]: I1227 05:46:58.691589 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:46:58 crc kubenswrapper[4760]: I1227 05:46:58.798389 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-cni-sysctl-allowlist\") pod \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\" (UID: \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\") " Dec 27 05:46:58 crc kubenswrapper[4760]: I1227 05:46:58.798701 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-ready\") pod \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\" (UID: \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\") " Dec 27 05:46:58 crc kubenswrapper[4760]: I1227 05:46:58.798854 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxzvw\" (UniqueName: \"kubernetes.io/projected/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-kube-api-access-lxzvw\") pod \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\" (UID: \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\") " Dec 27 05:46:58 crc kubenswrapper[4760]: I1227 05:46:58.798883 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-tuning-conf-dir\") pod \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\" (UID: \"abd7bc00-bf5b-48a1-94fe-82dae0bc732e\") " Dec 27 05:46:58 crc kubenswrapper[4760]: I1227 05:46:58.799178 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "abd7bc00-bf5b-48a1-94fe-82dae0bc732e" (UID: "abd7bc00-bf5b-48a1-94fe-82dae0bc732e"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:46:58 crc kubenswrapper[4760]: I1227 05:46:58.799322 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-ready" (OuterVolumeSpecName: "ready") pod "abd7bc00-bf5b-48a1-94fe-82dae0bc732e" (UID: "abd7bc00-bf5b-48a1-94fe-82dae0bc732e"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:46:58 crc kubenswrapper[4760]: I1227 05:46:58.799462 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "abd7bc00-bf5b-48a1-94fe-82dae0bc732e" (UID: "abd7bc00-bf5b-48a1-94fe-82dae0bc732e"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:46:58 crc kubenswrapper[4760]: I1227 05:46:58.806843 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-kube-api-access-lxzvw" (OuterVolumeSpecName: "kube-api-access-lxzvw") pod "abd7bc00-bf5b-48a1-94fe-82dae0bc732e" (UID: "abd7bc00-bf5b-48a1-94fe-82dae0bc732e"). InnerVolumeSpecName "kube-api-access-lxzvw". 
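The UnmountVolume.TearDown sequence above enumerates every volume of the deleted cni-sysctl-allowlist-ds-h4drj pod, and each entry names the plugin that backed it, so the pod's volume section can be read back out of the teardown. A sketch, with volume names and plugin types taken from the log; anything the journal does not record is marked as an assumption:

# Volumes of cni-sysctl-allowlist-ds-h4drj as implied by the
# UnmountVolume.TearDown entries above; reconstructed, not copied
# from the DaemonSet.
volumes:
- name: cni-sysctl-allowlist
  configMap:
    name: cni-sysctl-allowlist      # assumed to match the volume name
- name: ready                        # the emptyDir the readiness probe checks via /ready/ready
  emptyDir: {}
- name: tuning-conf-dir
  hostPath:
    path: /path/not/shown-in-log    # the journal records only the plugin type
- name: kube-api-access-lxzvw
  projected:
    sources:
    - serviceAccountToken:
        path: token                  # standard kube-api-access shape

This ties the earlier probe failures together: once the "ready" emptyDir is unmounted, the test -f /ready/ready probe has nothing left to find even in principle.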
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:46:58 crc kubenswrapper[4760]: I1227 05:46:58.899511 4760 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-ready\") on node \"crc\" DevicePath \"\"" Dec 27 05:46:58 crc kubenswrapper[4760]: I1227 05:46:58.899544 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxzvw\" (UniqueName: \"kubernetes.io/projected/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-kube-api-access-lxzvw\") on node \"crc\" DevicePath \"\"" Dec 27 05:46:58 crc kubenswrapper[4760]: I1227 05:46:58.899556 4760 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 27 05:46:58 crc kubenswrapper[4760]: I1227 05:46:58.899564 4760 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/abd7bc00-bf5b-48a1-94fe-82dae0bc732e-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 27 05:46:59 crc kubenswrapper[4760]: W1227 05:46:59.057281 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-2a4cc9faaa5927d7e426294aa35dd3d019aaa6fd038778956c44d414bceac7ff WatchSource:0}: Error finding container 2a4cc9faaa5927d7e426294aa35dd3d019aaa6fd038778956c44d414bceac7ff: Status 404 returned error can't find the container with id 2a4cc9faaa5927d7e426294aa35dd3d019aaa6fd038778956c44d414bceac7ff Dec 27 05:46:59 crc kubenswrapper[4760]: I1227 05:46:59.098883 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-h4drj_abd7bc00-bf5b-48a1-94fe-82dae0bc732e/kube-multus-additional-cni-plugins/0.log" Dec 27 05:46:59 crc kubenswrapper[4760]: I1227 05:46:59.098973 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" event={"ID":"abd7bc00-bf5b-48a1-94fe-82dae0bc732e","Type":"ContainerDied","Data":"dc45048db49673f214ae57a30d034defea7a961a5c7e3662abacc39268573fd0"} Dec 27 05:46:59 crc kubenswrapper[4760]: I1227 05:46:59.098991 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-h4drj" Dec 27 05:46:59 crc kubenswrapper[4760]: I1227 05:46:59.099039 4760 scope.go:117] "RemoveContainer" containerID="d792891bc6468e3e9041f06201242246f35265c228f2a413d90a4f929bddb668" Dec 27 05:46:59 crc kubenswrapper[4760]: I1227 05:46:59.100342 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d33f10f9b1967a60a9198dc8c027d17b1b726327d4e441a5fd6c2ccfc26a933d"} Dec 27 05:46:59 crc kubenswrapper[4760]: I1227 05:46:59.103625 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2a4cc9faaa5927d7e426294aa35dd3d019aaa6fd038778956c44d414bceac7ff"} Dec 27 05:46:59 crc kubenswrapper[4760]: E1227 05:46:59.104027 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p6tss" podUID="ee14eb8b-710a-4d45-b2a6-008c2f02b154" Dec 27 05:46:59 crc kubenswrapper[4760]: E1227 05:46:59.104853 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xz5fn" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" Dec 27 05:46:59 crc kubenswrapper[4760]: E1227 05:46:59.108333 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lszcc" podUID="4b5d003c-9d11-417b-aafd-19fde5a27981" Dec 27 05:46:59 crc kubenswrapper[4760]: E1227 05:46:59.108395 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7msw2" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" Dec 27 05:46:59 crc kubenswrapper[4760]: I1227 05:46:59.148297 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 27 05:46:59 crc kubenswrapper[4760]: I1227 05:46:59.194009 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 27 05:46:59 crc kubenswrapper[4760]: W1227 05:46:59.214342 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4c05638c_2f19_4a13_8467_07f04339c86d.slice/crio-cec5c356613e68d4146c2ec0099e1f194000bc423cd13f25a03e83f1f684a1c9 WatchSource:0}: Error finding container cec5c356613e68d4146c2ec0099e1f194000bc423cd13f25a03e83f1f684a1c9: Status 404 returned error can't find the container with id cec5c356613e68d4146c2ec0099e1f194000bc423cd13f25a03e83f1f684a1c9 Dec 27 05:46:59 crc kubenswrapper[4760]: I1227 05:46:59.215015 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-h4drj"] Dec 27 05:46:59 crc kubenswrapper[4760]: I1227 
05:46:59.223808 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-h4drj"] Dec 27 05:46:59 crc kubenswrapper[4760]: I1227 05:46:59.512214 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abd7bc00-bf5b-48a1-94fe-82dae0bc732e" path="/var/lib/kubelet/pods/abd7bc00-bf5b-48a1-94fe-82dae0bc732e/volumes" Dec 27 05:47:00 crc kubenswrapper[4760]: I1227 05:47:00.109838 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ec216bbd126d441e9abea3bde5983d9935080392640314237c486a03511f796e"} Dec 27 05:47:00 crc kubenswrapper[4760]: I1227 05:47:00.111458 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a7aafd665e31d15e95f7a031813021aedc9e6e5d886ed25b8e71cf38a3f18cab"} Dec 27 05:47:00 crc kubenswrapper[4760]: I1227 05:47:00.111597 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:47:00 crc kubenswrapper[4760]: I1227 05:47:00.113055 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7a400a4d-065a-4a2a-8852-0bdb47713da4","Type":"ContainerStarted","Data":"1ddd8ae8a8dbdd5f08f7754f11874039a223d83c56dc9bcda3dd09288b5329c8"} Dec 27 05:47:00 crc kubenswrapper[4760]: I1227 05:47:00.113161 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7a400a4d-065a-4a2a-8852-0bdb47713da4","Type":"ContainerStarted","Data":"237bbd50098ee180e043944b1d09b2b9e387ccf806edab4fb183a17b7f2db4fe"} Dec 27 05:47:00 crc kubenswrapper[4760]: I1227 05:47:00.114505 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4c05638c-2f19-4a13-8467-07f04339c86d","Type":"ContainerStarted","Data":"d15a2004d4acd4913684aabc3f038e9e0b0b0717113dca2b8569f5273ba82960"} Dec 27 05:47:00 crc kubenswrapper[4760]: I1227 05:47:00.114532 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4c05638c-2f19-4a13-8467-07f04339c86d","Type":"ContainerStarted","Data":"cec5c356613e68d4146c2ec0099e1f194000bc423cd13f25a03e83f1f684a1c9"} Dec 27 05:47:00 crc kubenswrapper[4760]: I1227 05:47:00.115750 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c4163e22704e5ff3f70c36700ab25abfbb84042e05160540fc4721d0a95d096f"} Dec 27 05:47:00 crc kubenswrapper[4760]: I1227 05:47:00.115776 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0294c0a3a06944458b3118dafc1c0d7357be7e63ac4b5dcb62a15a7ff62dce15"} Dec 27 05:47:00 crc kubenswrapper[4760]: I1227 05:47:00.117695 4760 generic.go:334] "Generic (PLEG): container finished" podID="21439a0b-e71e-4574-87d5-cd7881120c41" containerID="9f0a4a6cab1d3e665406cbaa0d99c75ab3ce01301c9dc412a78c8977c85caec2" exitCode=0 Dec 27 05:47:00 crc kubenswrapper[4760]: I1227 05:47:00.117721 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gp8b" event={"ID":"21439a0b-e71e-4574-87d5-cd7881120c41","Type":"ContainerDied","Data":"9f0a4a6cab1d3e665406cbaa0d99c75ab3ce01301c9dc412a78c8977c85caec2"} Dec 27 05:47:00 crc kubenswrapper[4760]: I1227 05:47:00.158771 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=25.158752934 podStartE2EDuration="25.158752934s" podCreationTimestamp="2025-12-27 05:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:47:00.158399085 +0000 UTC m=+142.918468490" watchObservedRunningTime="2025-12-27 05:47:00.158752934 +0000 UTC m=+142.918822259" Dec 27 05:47:00 crc kubenswrapper[4760]: I1227 05:47:00.173522 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=21.173502548 podStartE2EDuration="21.173502548s" podCreationTimestamp="2025-12-27 05:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:47:00.171469699 +0000 UTC m=+142.931539054" watchObservedRunningTime="2025-12-27 05:47:00.173502548 +0000 UTC m=+142.933571863" Dec 27 05:47:02 crc kubenswrapper[4760]: I1227 05:47:02.141801 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4c05638c-2f19-4a13-8467-07f04339c86d","Type":"ContainerDied","Data":"d15a2004d4acd4913684aabc3f038e9e0b0b0717113dca2b8569f5273ba82960"} Dec 27 05:47:02 crc kubenswrapper[4760]: I1227 05:47:02.141966 4760 generic.go:334] "Generic (PLEG): container finished" podID="4c05638c-2f19-4a13-8467-07f04339c86d" containerID="d15a2004d4acd4913684aabc3f038e9e0b0b0717113dca2b8569f5273ba82960" exitCode=0 Dec 27 05:47:03 crc kubenswrapper[4760]: I1227 05:47:03.506494 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 27 05:47:03 crc kubenswrapper[4760]: I1227 05:47:03.661419 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c05638c-2f19-4a13-8467-07f04339c86d-kubelet-dir\") pod \"4c05638c-2f19-4a13-8467-07f04339c86d\" (UID: \"4c05638c-2f19-4a13-8467-07f04339c86d\") " Dec 27 05:47:03 crc kubenswrapper[4760]: I1227 05:47:03.661545 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c05638c-2f19-4a13-8467-07f04339c86d-kube-api-access\") pod \"4c05638c-2f19-4a13-8467-07f04339c86d\" (UID: \"4c05638c-2f19-4a13-8467-07f04339c86d\") " Dec 27 05:47:03 crc kubenswrapper[4760]: I1227 05:47:03.661629 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c05638c-2f19-4a13-8467-07f04339c86d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4c05638c-2f19-4a13-8467-07f04339c86d" (UID: "4c05638c-2f19-4a13-8467-07f04339c86d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:47:03 crc kubenswrapper[4760]: I1227 05:47:03.662536 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c05638c-2f19-4a13-8467-07f04339c86d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:03 crc kubenswrapper[4760]: I1227 05:47:03.669787 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c05638c-2f19-4a13-8467-07f04339c86d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4c05638c-2f19-4a13-8467-07f04339c86d" (UID: "4c05638c-2f19-4a13-8467-07f04339c86d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:47:03 crc kubenswrapper[4760]: I1227 05:47:03.764503 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c05638c-2f19-4a13-8467-07f04339c86d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:04 crc kubenswrapper[4760]: I1227 05:47:04.160120 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4c05638c-2f19-4a13-8467-07f04339c86d","Type":"ContainerDied","Data":"cec5c356613e68d4146c2ec0099e1f194000bc423cd13f25a03e83f1f684a1c9"} Dec 27 05:47:04 crc kubenswrapper[4760]: I1227 05:47:04.160179 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cec5c356613e68d4146c2ec0099e1f194000bc423cd13f25a03e83f1f684a1c9" Dec 27 05:47:04 crc kubenswrapper[4760]: I1227 05:47:04.160232 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 27 05:47:05 crc kubenswrapper[4760]: I1227 05:47:05.170411 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gp8b" event={"ID":"21439a0b-e71e-4574-87d5-cd7881120c41","Type":"ContainerStarted","Data":"3ef22e4141ff429ad36aab5aeb526ff8e0bb852752971e01a9ed8c302d5b0815"} Dec 27 05:47:05 crc kubenswrapper[4760]: I1227 05:47:05.195706 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6gp8b" podStartSLOduration=5.117535226 podStartE2EDuration="1m15.195686729s" podCreationTimestamp="2025-12-27 05:45:50 +0000 UTC" firstStartedPulling="2025-12-27 05:45:54.380651952 +0000 UTC m=+77.140721267" lastFinishedPulling="2025-12-27 05:47:04.458803435 +0000 UTC m=+147.218872770" observedRunningTime="2025-12-27 05:47:05.195176277 +0000 UTC m=+147.955245652" watchObservedRunningTime="2025-12-27 05:47:05.195686729 +0000 UTC m=+147.955756054" Dec 27 05:47:05 crc kubenswrapper[4760]: I1227 05:47:05.287410 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 05:47:05 crc kubenswrapper[4760]: I1227 05:47:05.287511 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 05:47:11 crc kubenswrapper[4760]: I1227 05:47:11.222507 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:47:11 crc kubenswrapper[4760]: I1227 05:47:11.223246 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:47:11 crc kubenswrapper[4760]: I1227 05:47:11.468451 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:47:12 crc kubenswrapper[4760]: I1227 05:47:12.278316 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:47:13 crc kubenswrapper[4760]: I1227 05:47:13.220726 4760 generic.go:334] "Generic (PLEG): container finished" podID="89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" containerID="bf89bde39ea3efe37f8f7212f9a810accb945b939ea44643e998f6b6d3507064" exitCode=0 Dec 27 05:47:13 crc kubenswrapper[4760]: I1227 05:47:13.220844 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr26d" event={"ID":"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1","Type":"ContainerDied","Data":"bf89bde39ea3efe37f8f7212f9a810accb945b939ea44643e998f6b6d3507064"} Dec 27 05:47:14 crc kubenswrapper[4760]: I1227 05:47:14.548276 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6gp8b"] Dec 27 05:47:14 crc kubenswrapper[4760]: I1227 05:47:14.549070 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6gp8b" podUID="21439a0b-e71e-4574-87d5-cd7881120c41" containerName="registry-server" containerID="cri-o://3ef22e4141ff429ad36aab5aeb526ff8e0bb852752971e01a9ed8c302d5b0815" gracePeriod=2 Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.203286 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.221572 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21439a0b-e71e-4574-87d5-cd7881120c41-catalog-content\") pod \"21439a0b-e71e-4574-87d5-cd7881120c41\" (UID: \"21439a0b-e71e-4574-87d5-cd7881120c41\") " Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.221667 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jx74\" (UniqueName: \"kubernetes.io/projected/21439a0b-e71e-4574-87d5-cd7881120c41-kube-api-access-8jx74\") pod \"21439a0b-e71e-4574-87d5-cd7881120c41\" (UID: \"21439a0b-e71e-4574-87d5-cd7881120c41\") " Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.221712 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21439a0b-e71e-4574-87d5-cd7881120c41-utilities\") pod \"21439a0b-e71e-4574-87d5-cd7881120c41\" (UID: \"21439a0b-e71e-4574-87d5-cd7881120c41\") " Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.222629 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21439a0b-e71e-4574-87d5-cd7881120c41-utilities" (OuterVolumeSpecName: "utilities") pod "21439a0b-e71e-4574-87d5-cd7881120c41" (UID: "21439a0b-e71e-4574-87d5-cd7881120c41"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.228582 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21439a0b-e71e-4574-87d5-cd7881120c41-kube-api-access-8jx74" (OuterVolumeSpecName: "kube-api-access-8jx74") pod "21439a0b-e71e-4574-87d5-cd7881120c41" (UID: "21439a0b-e71e-4574-87d5-cd7881120c41"). InnerVolumeSpecName "kube-api-access-8jx74". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.247138 4760 generic.go:334] "Generic (PLEG): container finished" podID="21439a0b-e71e-4574-87d5-cd7881120c41" containerID="3ef22e4141ff429ad36aab5aeb526ff8e0bb852752971e01a9ed8c302d5b0815" exitCode=0 Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.247185 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gp8b" event={"ID":"21439a0b-e71e-4574-87d5-cd7881120c41","Type":"ContainerDied","Data":"3ef22e4141ff429ad36aab5aeb526ff8e0bb852752971e01a9ed8c302d5b0815"} Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.247249 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gp8b" event={"ID":"21439a0b-e71e-4574-87d5-cd7881120c41","Type":"ContainerDied","Data":"4902c247d665c6466d227f324414ee70a93302052a507eece4873f8506af1d0f"} Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.247250 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6gp8b" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.247269 4760 scope.go:117] "RemoveContainer" containerID="3ef22e4141ff429ad36aab5aeb526ff8e0bb852752971e01a9ed8c302d5b0815" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.254068 4760 generic.go:334] "Generic (PLEG): container finished" podID="b811bcc1-2320-4047-93a9-9d79516a3551" containerID="4b535231187429afdc707a819278bb143d3608c6ecc52650f909cb6bc0fd057d" exitCode=0 Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.254161 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch56x" event={"ID":"b811bcc1-2320-4047-93a9-9d79516a3551","Type":"ContainerDied","Data":"4b535231187429afdc707a819278bb143d3608c6ecc52650f909cb6bc0fd057d"} Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.259799 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xz5fn" event={"ID":"dad6d9ba-2049-4d43-a786-9ce87644643f","Type":"ContainerStarted","Data":"ae577f4199318e7f29f75a78f7a1872f635cf644da91649680d8a5fece1da4de"} Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.268372 4760 scope.go:117] "RemoveContainer" containerID="9f0a4a6cab1d3e665406cbaa0d99c75ab3ce01301c9dc412a78c8977c85caec2" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.268896 4760 generic.go:334] "Generic (PLEG): container finished" podID="ee14eb8b-710a-4d45-b2a6-008c2f02b154" containerID="d5594fca9d51a7b4b8fd69e4f3439063f08f5810b2992d11e8bd542dddcde6c9" exitCode=0 Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.268959 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p6tss" event={"ID":"ee14eb8b-710a-4d45-b2a6-008c2f02b154","Type":"ContainerDied","Data":"d5594fca9d51a7b4b8fd69e4f3439063f08f5810b2992d11e8bd542dddcde6c9"} Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.282116 4760 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21439a0b-e71e-4574-87d5-cd7881120c41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21439a0b-e71e-4574-87d5-cd7881120c41" (UID: "21439a0b-e71e-4574-87d5-cd7881120c41"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.283549 4760 generic.go:334] "Generic (PLEG): container finished" podID="4b5d003c-9d11-417b-aafd-19fde5a27981" containerID="abe6e183e1498fc04e19548ccaa473403e9ab9e6f7d0e11c2dcafa0c3e830863" exitCode=0 Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.283626 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lszcc" event={"ID":"4b5d003c-9d11-417b-aafd-19fde5a27981","Type":"ContainerDied","Data":"abe6e183e1498fc04e19548ccaa473403e9ab9e6f7d0e11c2dcafa0c3e830863"} Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.288661 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr26d" event={"ID":"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1","Type":"ContainerStarted","Data":"cc8bbfea7df506fd1696314300f522eb2d946dca056c3921fa60e7d200324f60"} Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.291753 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7msw2" event={"ID":"55c01569-3cd2-4c5f-9039-b61176dac0f3","Type":"ContainerStarted","Data":"fea1b5310b68b70f9d441be0cbf3af0700828e95145270347c0119a06248c608"} Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.298887 4760 generic.go:334] "Generic (PLEG): container finished" podID="277187d7-c71a-4583-8d65-2e713e20557d" containerID="1f1fb51426a7d89f32ddde43eb3428a7ead38e88a05c1bef65ec515dbcea4a8a" exitCode=0 Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.299001 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqm68" event={"ID":"277187d7-c71a-4583-8d65-2e713e20557d","Type":"ContainerDied","Data":"1f1fb51426a7d89f32ddde43eb3428a7ead38e88a05c1bef65ec515dbcea4a8a"} Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.318527 4760 scope.go:117] "RemoveContainer" containerID="c166898314fbab3fb25d8c83db68f23af5459f76c4fad192abd29d93b1b907dc" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.332861 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21439a0b-e71e-4574-87d5-cd7881120c41-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.332897 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jx74\" (UniqueName: \"kubernetes.io/projected/21439a0b-e71e-4574-87d5-cd7881120c41-kube-api-access-8jx74\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.332911 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21439a0b-e71e-4574-87d5-cd7881120c41-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.341502 4760 scope.go:117] "RemoveContainer" containerID="3ef22e4141ff429ad36aab5aeb526ff8e0bb852752971e01a9ed8c302d5b0815" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.346239 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mr26d" podStartSLOduration=5.634738044 
podStartE2EDuration="1m23.346223518s" podCreationTimestamp="2025-12-27 05:45:52 +0000 UTC" firstStartedPulling="2025-12-27 05:45:56.624372509 +0000 UTC m=+79.384441824" lastFinishedPulling="2025-12-27 05:47:14.335857973 +0000 UTC m=+157.095927298" observedRunningTime="2025-12-27 05:47:15.344629379 +0000 UTC m=+158.104698704" watchObservedRunningTime="2025-12-27 05:47:15.346223518 +0000 UTC m=+158.106292833" Dec 27 05:47:15 crc kubenswrapper[4760]: E1227 05:47:15.346397 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ef22e4141ff429ad36aab5aeb526ff8e0bb852752971e01a9ed8c302d5b0815\": container with ID starting with 3ef22e4141ff429ad36aab5aeb526ff8e0bb852752971e01a9ed8c302d5b0815 not found: ID does not exist" containerID="3ef22e4141ff429ad36aab5aeb526ff8e0bb852752971e01a9ed8c302d5b0815" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.346449 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ef22e4141ff429ad36aab5aeb526ff8e0bb852752971e01a9ed8c302d5b0815"} err="failed to get container status \"3ef22e4141ff429ad36aab5aeb526ff8e0bb852752971e01a9ed8c302d5b0815\": rpc error: code = NotFound desc = could not find container \"3ef22e4141ff429ad36aab5aeb526ff8e0bb852752971e01a9ed8c302d5b0815\": container with ID starting with 3ef22e4141ff429ad36aab5aeb526ff8e0bb852752971e01a9ed8c302d5b0815 not found: ID does not exist" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.346505 4760 scope.go:117] "RemoveContainer" containerID="9f0a4a6cab1d3e665406cbaa0d99c75ab3ce01301c9dc412a78c8977c85caec2" Dec 27 05:47:15 crc kubenswrapper[4760]: E1227 05:47:15.347396 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f0a4a6cab1d3e665406cbaa0d99c75ab3ce01301c9dc412a78c8977c85caec2\": container with ID starting with 9f0a4a6cab1d3e665406cbaa0d99c75ab3ce01301c9dc412a78c8977c85caec2 not found: ID does not exist" containerID="9f0a4a6cab1d3e665406cbaa0d99c75ab3ce01301c9dc412a78c8977c85caec2" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.347420 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f0a4a6cab1d3e665406cbaa0d99c75ab3ce01301c9dc412a78c8977c85caec2"} err="failed to get container status \"9f0a4a6cab1d3e665406cbaa0d99c75ab3ce01301c9dc412a78c8977c85caec2\": rpc error: code = NotFound desc = could not find container \"9f0a4a6cab1d3e665406cbaa0d99c75ab3ce01301c9dc412a78c8977c85caec2\": container with ID starting with 9f0a4a6cab1d3e665406cbaa0d99c75ab3ce01301c9dc412a78c8977c85caec2 not found: ID does not exist" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.347437 4760 scope.go:117] "RemoveContainer" containerID="c166898314fbab3fb25d8c83db68f23af5459f76c4fad192abd29d93b1b907dc" Dec 27 05:47:15 crc kubenswrapper[4760]: E1227 05:47:15.347637 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c166898314fbab3fb25d8c83db68f23af5459f76c4fad192abd29d93b1b907dc\": container with ID starting with c166898314fbab3fb25d8c83db68f23af5459f76c4fad192abd29d93b1b907dc not found: ID does not exist" containerID="c166898314fbab3fb25d8c83db68f23af5459f76c4fad192abd29d93b1b907dc" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.347658 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c166898314fbab3fb25d8c83db68f23af5459f76c4fad192abd29d93b1b907dc"} err="failed to get container status \"c166898314fbab3fb25d8c83db68f23af5459f76c4fad192abd29d93b1b907dc\": rpc error: code = NotFound desc = could not find container \"c166898314fbab3fb25d8c83db68f23af5459f76c4fad192abd29d93b1b907dc\": container with ID starting with c166898314fbab3fb25d8c83db68f23af5459f76c4fad192abd29d93b1b907dc not found: ID does not exist" Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.570147 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6gp8b"] Dec 27 05:47:15 crc kubenswrapper[4760]: I1227 05:47:15.573737 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6gp8b"] Dec 27 05:47:16 crc kubenswrapper[4760]: I1227 05:47:16.307862 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqm68" event={"ID":"277187d7-c71a-4583-8d65-2e713e20557d","Type":"ContainerStarted","Data":"b6a9de9f71dc3d0e339e3859de933f1498f6410e6a0c4a447f83667bb6684089"} Dec 27 05:47:16 crc kubenswrapper[4760]: I1227 05:47:16.311522 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch56x" event={"ID":"b811bcc1-2320-4047-93a9-9d79516a3551","Type":"ContainerStarted","Data":"f25cd89215f4a8b72ae13747a393fa9bb645f0dea6971bbd16f6dc85b3750df6"} Dec 27 05:47:16 crc kubenswrapper[4760]: I1227 05:47:16.313761 4760 generic.go:334] "Generic (PLEG): container finished" podID="dad6d9ba-2049-4d43-a786-9ce87644643f" containerID="ae577f4199318e7f29f75a78f7a1872f635cf644da91649680d8a5fece1da4de" exitCode=0 Dec 27 05:47:16 crc kubenswrapper[4760]: I1227 05:47:16.313821 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xz5fn" event={"ID":"dad6d9ba-2049-4d43-a786-9ce87644643f","Type":"ContainerDied","Data":"ae577f4199318e7f29f75a78f7a1872f635cf644da91649680d8a5fece1da4de"} Dec 27 05:47:16 crc kubenswrapper[4760]: I1227 05:47:16.317031 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lszcc" event={"ID":"4b5d003c-9d11-417b-aafd-19fde5a27981","Type":"ContainerStarted","Data":"d68865fab42c77f298c74b142535642bb568f10fca2b5616d3bd1e46f5b19cfa"} Dec 27 05:47:16 crc kubenswrapper[4760]: I1227 05:47:16.320934 4760 generic.go:334] "Generic (PLEG): container finished" podID="55c01569-3cd2-4c5f-9039-b61176dac0f3" containerID="fea1b5310b68b70f9d441be0cbf3af0700828e95145270347c0119a06248c608" exitCode=0 Dec 27 05:47:16 crc kubenswrapper[4760]: I1227 05:47:16.320965 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7msw2" event={"ID":"55c01569-3cd2-4c5f-9039-b61176dac0f3","Type":"ContainerDied","Data":"fea1b5310b68b70f9d441be0cbf3af0700828e95145270347c0119a06248c608"} Dec 27 05:47:16 crc kubenswrapper[4760]: I1227 05:47:16.328818 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xqm68" podStartSLOduration=3.926315662 podStartE2EDuration="1m26.328804385s" podCreationTimestamp="2025-12-27 05:45:50 +0000 UTC" firstStartedPulling="2025-12-27 05:45:53.344838745 +0000 UTC m=+76.104908060" lastFinishedPulling="2025-12-27 05:47:15.747327468 +0000 UTC m=+158.507396783" observedRunningTime="2025-12-27 05:47:16.326824668 +0000 UTC m=+159.086893983" watchObservedRunningTime="2025-12-27 05:47:16.328804385 +0000 UTC 
m=+159.088873700" Dec 27 05:47:16 crc kubenswrapper[4760]: I1227 05:47:16.366942 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ch56x" podStartSLOduration=3.876395467 podStartE2EDuration="1m26.36692284s" podCreationTimestamp="2025-12-27 05:45:50 +0000 UTC" firstStartedPulling="2025-12-27 05:45:53.227788219 +0000 UTC m=+75.987857534" lastFinishedPulling="2025-12-27 05:47:15.718315592 +0000 UTC m=+158.478384907" observedRunningTime="2025-12-27 05:47:16.363407205 +0000 UTC m=+159.123476530" watchObservedRunningTime="2025-12-27 05:47:16.36692284 +0000 UTC m=+159.126992155" Dec 27 05:47:16 crc kubenswrapper[4760]: I1227 05:47:16.382900 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lszcc" podStartSLOduration=4.80322303 podStartE2EDuration="1m26.382882592s" podCreationTimestamp="2025-12-27 05:45:50 +0000 UTC" firstStartedPulling="2025-12-27 05:45:54.347831158 +0000 UTC m=+77.107900473" lastFinishedPulling="2025-12-27 05:47:15.92749072 +0000 UTC m=+158.687560035" observedRunningTime="2025-12-27 05:47:16.379396329 +0000 UTC m=+159.139465654" watchObservedRunningTime="2025-12-27 05:47:16.382882592 +0000 UTC m=+159.142951897" Dec 27 05:47:17 crc kubenswrapper[4760]: I1227 05:47:17.327661 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p6tss" event={"ID":"ee14eb8b-710a-4d45-b2a6-008c2f02b154","Type":"ContainerStarted","Data":"3db98b117fa2f4102cfe33590ed007b7e8cc3584d33f6e7286855f582d382cdb"} Dec 27 05:47:17 crc kubenswrapper[4760]: I1227 05:47:17.508278 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21439a0b-e71e-4574-87d5-cd7881120c41" path="/var/lib/kubelet/pods/21439a0b-e71e-4574-87d5-cd7881120c41/volumes" Dec 27 05:47:20 crc kubenswrapper[4760]: I1227 05:47:20.566065 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:47:20 crc kubenswrapper[4760]: I1227 05:47:20.566914 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:47:20 crc kubenswrapper[4760]: I1227 05:47:20.609075 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:47:20 crc kubenswrapper[4760]: I1227 05:47:20.626409 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p6tss" podStartSLOduration=9.083816183 podStartE2EDuration="1m28.626393837s" podCreationTimestamp="2025-12-27 05:45:52 +0000 UTC" firstStartedPulling="2025-12-27 05:45:56.666033359 +0000 UTC m=+79.426102674" lastFinishedPulling="2025-12-27 05:47:16.208611013 +0000 UTC m=+158.968680328" observedRunningTime="2025-12-27 05:47:17.354286582 +0000 UTC m=+160.114355917" watchObservedRunningTime="2025-12-27 05:47:20.626393837 +0000 UTC m=+163.386463152" Dec 27 05:47:20 crc kubenswrapper[4760]: I1227 05:47:20.750847 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:47:20 crc kubenswrapper[4760]: I1227 05:47:20.750892 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:47:20 crc kubenswrapper[4760]: I1227 05:47:20.789879 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:47:21 crc kubenswrapper[4760]: I1227 05:47:21.139461 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:47:21 crc kubenswrapper[4760]: I1227 05:47:21.139526 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:47:21 crc kubenswrapper[4760]: I1227 05:47:21.177699 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:47:21 crc kubenswrapper[4760]: I1227 05:47:21.385379 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:47:21 crc kubenswrapper[4760]: I1227 05:47:21.385436 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:47:21 crc kubenswrapper[4760]: I1227 05:47:21.389829 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:47:22 crc kubenswrapper[4760]: I1227 05:47:22.873475 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:47:22 crc kubenswrapper[4760]: I1227 05:47:22.873530 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:47:22 crc kubenswrapper[4760]: I1227 05:47:22.917981 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:47:22 crc kubenswrapper[4760]: I1227 05:47:22.946442 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ch56x"] Dec 27 05:47:23 crc kubenswrapper[4760]: I1227 05:47:23.301253 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:47:23 crc kubenswrapper[4760]: I1227 05:47:23.301305 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:47:23 crc kubenswrapper[4760]: I1227 05:47:23.339827 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:47:23 crc kubenswrapper[4760]: I1227 05:47:23.361277 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ch56x" podUID="b811bcc1-2320-4047-93a9-9d79516a3551" containerName="registry-server" containerID="cri-o://f25cd89215f4a8b72ae13747a393fa9bb645f0dea6971bbd16f6dc85b3750df6" gracePeriod=2 Dec 27 05:47:23 crc kubenswrapper[4760]: I1227 05:47:23.398999 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:47:23 crc kubenswrapper[4760]: I1227 05:47:23.435270 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:47:25 crc kubenswrapper[4760]: I1227 05:47:25.344718 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p6tss"] Dec 27 05:47:25 crc kubenswrapper[4760]: I1227 05:47:25.375897 4760 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-marketplace-p6tss" podUID="ee14eb8b-710a-4d45-b2a6-008c2f02b154" containerName="registry-server" containerID="cri-o://3db98b117fa2f4102cfe33590ed007b7e8cc3584d33f6e7286855f582d382cdb" gracePeriod=2 Dec 27 05:47:26 crc kubenswrapper[4760]: I1227 05:47:26.382222 4760 generic.go:334] "Generic (PLEG): container finished" podID="b811bcc1-2320-4047-93a9-9d79516a3551" containerID="f25cd89215f4a8b72ae13747a393fa9bb645f0dea6971bbd16f6dc85b3750df6" exitCode=0 Dec 27 05:47:26 crc kubenswrapper[4760]: I1227 05:47:26.382272 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch56x" event={"ID":"b811bcc1-2320-4047-93a9-9d79516a3551","Type":"ContainerDied","Data":"f25cd89215f4a8b72ae13747a393fa9bb645f0dea6971bbd16f6dc85b3750df6"} Dec 27 05:47:31 crc kubenswrapper[4760]: E1227 05:47:31.139873 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f25cd89215f4a8b72ae13747a393fa9bb645f0dea6971bbd16f6dc85b3750df6 is running failed: container process not found" containerID="f25cd89215f4a8b72ae13747a393fa9bb645f0dea6971bbd16f6dc85b3750df6" cmd=["grpc_health_probe","-addr=:50051"] Dec 27 05:47:31 crc kubenswrapper[4760]: E1227 05:47:31.140545 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f25cd89215f4a8b72ae13747a393fa9bb645f0dea6971bbd16f6dc85b3750df6 is running failed: container process not found" containerID="f25cd89215f4a8b72ae13747a393fa9bb645f0dea6971bbd16f6dc85b3750df6" cmd=["grpc_health_probe","-addr=:50051"] Dec 27 05:47:31 crc kubenswrapper[4760]: E1227 05:47:31.140804 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f25cd89215f4a8b72ae13747a393fa9bb645f0dea6971bbd16f6dc85b3750df6 is running failed: container process not found" containerID="f25cd89215f4a8b72ae13747a393fa9bb645f0dea6971bbd16f6dc85b3750df6" cmd=["grpc_health_probe","-addr=:50051"] Dec 27 05:47:31 crc kubenswrapper[4760]: E1227 05:47:31.140849 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f25cd89215f4a8b72ae13747a393fa9bb645f0dea6971bbd16f6dc85b3750df6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-ch56x" podUID="b811bcc1-2320-4047-93a9-9d79516a3551" containerName="registry-server" Dec 27 05:47:31 crc kubenswrapper[4760]: I1227 05:47:31.711638 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x4v5w"] Dec 27 05:47:33 crc kubenswrapper[4760]: E1227 05:47:33.307360 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3db98b117fa2f4102cfe33590ed007b7e8cc3584d33f6e7286855f582d382cdb is running failed: container process not found" containerID="3db98b117fa2f4102cfe33590ed007b7e8cc3584d33f6e7286855f582d382cdb" cmd=["grpc_health_probe","-addr=:50051"] Dec 27 05:47:33 crc kubenswrapper[4760]: E1227 05:47:33.309562 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
3db98b117fa2f4102cfe33590ed007b7e8cc3584d33f6e7286855f582d382cdb is running failed: container process not found" containerID="3db98b117fa2f4102cfe33590ed007b7e8cc3584d33f6e7286855f582d382cdb" cmd=["grpc_health_probe","-addr=:50051"] Dec 27 05:47:33 crc kubenswrapper[4760]: E1227 05:47:33.310498 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3db98b117fa2f4102cfe33590ed007b7e8cc3584d33f6e7286855f582d382cdb is running failed: container process not found" containerID="3db98b117fa2f4102cfe33590ed007b7e8cc3584d33f6e7286855f582d382cdb" cmd=["grpc_health_probe","-addr=:50051"] Dec 27 05:47:33 crc kubenswrapper[4760]: E1227 05:47:33.310577 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3db98b117fa2f4102cfe33590ed007b7e8cc3584d33f6e7286855f582d382cdb is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-p6tss" podUID="ee14eb8b-710a-4d45-b2a6-008c2f02b154" containerName="registry-server" Dec 27 05:47:33 crc kubenswrapper[4760]: I1227 05:47:33.444308 4760 patch_prober.go:28] interesting pod/console-operator-58897d9998-2x9wk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 27 05:47:33 crc kubenswrapper[4760]: I1227 05:47:33.444405 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2x9wk" podUID="fdf8f299-d7d3-4ac5-a276-23fc6a9aabf1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 27 05:47:34 crc kubenswrapper[4760]: I1227 05:47:34.193943 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:47:34 crc kubenswrapper[4760]: I1227 05:47:34.245672 4760 generic.go:334] "Generic (PLEG): container finished" podID="ee14eb8b-710a-4d45-b2a6-008c2f02b154" containerID="3db98b117fa2f4102cfe33590ed007b7e8cc3584d33f6e7286855f582d382cdb" exitCode=0 Dec 27 05:47:34 crc kubenswrapper[4760]: I1227 05:47:34.245738 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p6tss" event={"ID":"ee14eb8b-710a-4d45-b2a6-008c2f02b154","Type":"ContainerDied","Data":"3db98b117fa2f4102cfe33590ed007b7e8cc3584d33f6e7286855f582d382cdb"} Dec 27 05:47:34 crc kubenswrapper[4760]: I1227 05:47:34.306782 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b811bcc1-2320-4047-93a9-9d79516a3551-catalog-content\") pod \"b811bcc1-2320-4047-93a9-9d79516a3551\" (UID: \"b811bcc1-2320-4047-93a9-9d79516a3551\") " Dec 27 05:47:34 crc kubenswrapper[4760]: I1227 05:47:34.306930 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxs2g\" (UniqueName: \"kubernetes.io/projected/b811bcc1-2320-4047-93a9-9d79516a3551-kube-api-access-lxs2g\") pod \"b811bcc1-2320-4047-93a9-9d79516a3551\" (UID: \"b811bcc1-2320-4047-93a9-9d79516a3551\") " Dec 27 05:47:34 crc kubenswrapper[4760]: I1227 05:47:34.306982 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b811bcc1-2320-4047-93a9-9d79516a3551-utilities\") pod \"b811bcc1-2320-4047-93a9-9d79516a3551\" (UID: \"b811bcc1-2320-4047-93a9-9d79516a3551\") " Dec 27 05:47:34 crc kubenswrapper[4760]: I1227 05:47:34.308016 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b811bcc1-2320-4047-93a9-9d79516a3551-utilities" (OuterVolumeSpecName: "utilities") pod "b811bcc1-2320-4047-93a9-9d79516a3551" (UID: "b811bcc1-2320-4047-93a9-9d79516a3551"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:47:34 crc kubenswrapper[4760]: I1227 05:47:34.320933 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b811bcc1-2320-4047-93a9-9d79516a3551-kube-api-access-lxs2g" (OuterVolumeSpecName: "kube-api-access-lxs2g") pod "b811bcc1-2320-4047-93a9-9d79516a3551" (UID: "b811bcc1-2320-4047-93a9-9d79516a3551"). InnerVolumeSpecName "kube-api-access-lxs2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:47:34 crc kubenswrapper[4760]: I1227 05:47:34.408974 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxs2g\" (UniqueName: \"kubernetes.io/projected/b811bcc1-2320-4047-93a9-9d79516a3551-kube-api-access-lxs2g\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:34 crc kubenswrapper[4760]: I1227 05:47:34.409028 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b811bcc1-2320-4047-93a9-9d79516a3551-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:34 crc kubenswrapper[4760]: I1227 05:47:34.743498 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b811bcc1-2320-4047-93a9-9d79516a3551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b811bcc1-2320-4047-93a9-9d79516a3551" (UID: "b811bcc1-2320-4047-93a9-9d79516a3551"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:47:34 crc kubenswrapper[4760]: I1227 05:47:34.817235 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b811bcc1-2320-4047-93a9-9d79516a3551-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.265837 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch56x" event={"ID":"b811bcc1-2320-4047-93a9-9d79516a3551","Type":"ContainerDied","Data":"9db2dc094d59931b40eb009127740150e3df65c8a1e5c495f0b722cad8999ed2"} Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.265893 4760 scope.go:117] "RemoveContainer" containerID="f25cd89215f4a8b72ae13747a393fa9bb645f0dea6971bbd16f6dc85b3750df6" Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.266044 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ch56x" Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.296412 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.296465 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.309985 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ch56x"] Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.315610 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ch56x"] Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.478629 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.517055 4760 scope.go:117] "RemoveContainer" containerID="4b535231187429afdc707a819278bb143d3608c6ecc52650f909cb6bc0fd057d" Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.526029 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b811bcc1-2320-4047-93a9-9d79516a3551" path="/var/lib/kubelet/pods/b811bcc1-2320-4047-93a9-9d79516a3551/volumes" Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.570972 4760 scope.go:117] "RemoveContainer" containerID="fac26d0244bb6a500b8a27f89dff44c45e2f978acda2e66f10984b359d3f80c7" Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.580821 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.626884 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee14eb8b-710a-4d45-b2a6-008c2f02b154-catalog-content\") pod \"ee14eb8b-710a-4d45-b2a6-008c2f02b154\" (UID: \"ee14eb8b-710a-4d45-b2a6-008c2f02b154\") " Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.626982 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee14eb8b-710a-4d45-b2a6-008c2f02b154-utilities\") pod \"ee14eb8b-710a-4d45-b2a6-008c2f02b154\" (UID: \"ee14eb8b-710a-4d45-b2a6-008c2f02b154\") " Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.627044 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8w8h\" (UniqueName: \"kubernetes.io/projected/ee14eb8b-710a-4d45-b2a6-008c2f02b154-kube-api-access-p8w8h\") pod \"ee14eb8b-710a-4d45-b2a6-008c2f02b154\" (UID: \"ee14eb8b-710a-4d45-b2a6-008c2f02b154\") " Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.628121 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee14eb8b-710a-4d45-b2a6-008c2f02b154-utilities" (OuterVolumeSpecName: "utilities") pod "ee14eb8b-710a-4d45-b2a6-008c2f02b154" (UID: "ee14eb8b-710a-4d45-b2a6-008c2f02b154"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.632915 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee14eb8b-710a-4d45-b2a6-008c2f02b154-kube-api-access-p8w8h" (OuterVolumeSpecName: "kube-api-access-p8w8h") pod "ee14eb8b-710a-4d45-b2a6-008c2f02b154" (UID: "ee14eb8b-710a-4d45-b2a6-008c2f02b154"). InnerVolumeSpecName "kube-api-access-p8w8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.647828 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee14eb8b-710a-4d45-b2a6-008c2f02b154-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee14eb8b-710a-4d45-b2a6-008c2f02b154" (UID: "ee14eb8b-710a-4d45-b2a6-008c2f02b154"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.727971 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee14eb8b-710a-4d45-b2a6-008c2f02b154-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.727999 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee14eb8b-710a-4d45-b2a6-008c2f02b154-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:35 crc kubenswrapper[4760]: I1227 05:47:35.728009 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8w8h\" (UniqueName: \"kubernetes.io/projected/ee14eb8b-710a-4d45-b2a6-008c2f02b154-kube-api-access-p8w8h\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:36 crc kubenswrapper[4760]: I1227 05:47:36.272831 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p6tss" event={"ID":"ee14eb8b-710a-4d45-b2a6-008c2f02b154","Type":"ContainerDied","Data":"b704a291018ef2799a9849eda749e81292b8ca1938e62a877b9b3f426c54691f"} Dec 27 05:47:36 crc kubenswrapper[4760]: I1227 05:47:36.272885 4760 scope.go:117] "RemoveContainer" containerID="3db98b117fa2f4102cfe33590ed007b7e8cc3584d33f6e7286855f582d382cdb" Dec 27 05:47:36 crc kubenswrapper[4760]: I1227 05:47:36.272903 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p6tss" Dec 27 05:47:36 crc kubenswrapper[4760]: I1227 05:47:36.286278 4760 scope.go:117] "RemoveContainer" containerID="d5594fca9d51a7b4b8fd69e4f3439063f08f5810b2992d11e8bd542dddcde6c9" Dec 27 05:47:36 crc kubenswrapper[4760]: I1227 05:47:36.306773 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p6tss"] Dec 27 05:47:36 crc kubenswrapper[4760]: I1227 05:47:36.307023 4760 scope.go:117] "RemoveContainer" containerID="07487603081c22e970427ddfc8a17c90d4e4003215de998185753048598b9e1e" Dec 27 05:47:36 crc kubenswrapper[4760]: I1227 05:47:36.316957 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p6tss"] Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.210883 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.211210 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21439a0b-e71e-4574-87d5-cd7881120c41" containerName="extract-content" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.211236 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="21439a0b-e71e-4574-87d5-cd7881120c41" containerName="extract-content" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.211254 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee14eb8b-710a-4d45-b2a6-008c2f02b154" containerName="registry-server" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.211265 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee14eb8b-710a-4d45-b2a6-008c2f02b154" containerName="registry-server" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.211280 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21439a0b-e71e-4574-87d5-cd7881120c41" containerName="extract-utilities" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.211291 4760 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="21439a0b-e71e-4574-87d5-cd7881120c41" containerName="extract-utilities" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.211310 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b811bcc1-2320-4047-93a9-9d79516a3551" containerName="extract-utilities" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.211321 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b811bcc1-2320-4047-93a9-9d79516a3551" containerName="extract-utilities" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.211338 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee14eb8b-710a-4d45-b2a6-008c2f02b154" containerName="extract-utilities" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.211351 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee14eb8b-710a-4d45-b2a6-008c2f02b154" containerName="extract-utilities" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.211369 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b811bcc1-2320-4047-93a9-9d79516a3551" containerName="extract-content" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.211380 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b811bcc1-2320-4047-93a9-9d79516a3551" containerName="extract-content" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.211397 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd7bc00-bf5b-48a1-94fe-82dae0bc732e" containerName="kube-multus-additional-cni-plugins" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.211410 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd7bc00-bf5b-48a1-94fe-82dae0bc732e" containerName="kube-multus-additional-cni-plugins" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.211425 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee14eb8b-710a-4d45-b2a6-008c2f02b154" containerName="extract-content" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.211437 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee14eb8b-710a-4d45-b2a6-008c2f02b154" containerName="extract-content" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.211458 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c05638c-2f19-4a13-8467-07f04339c86d" containerName="pruner" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.211469 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c05638c-2f19-4a13-8467-07f04339c86d" containerName="pruner" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.211484 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21439a0b-e71e-4574-87d5-cd7881120c41" containerName="registry-server" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.211494 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="21439a0b-e71e-4574-87d5-cd7881120c41" containerName="registry-server" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.211507 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b811bcc1-2320-4047-93a9-9d79516a3551" containerName="registry-server" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.211518 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b811bcc1-2320-4047-93a9-9d79516a3551" containerName="registry-server" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.211675 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="21439a0b-e71e-4574-87d5-cd7881120c41" containerName="registry-server" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 
05:47:37.211691 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="abd7bc00-bf5b-48a1-94fe-82dae0bc732e" containerName="kube-multus-additional-cni-plugins" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.211706 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b811bcc1-2320-4047-93a9-9d79516a3551" containerName="registry-server" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.211724 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c05638c-2f19-4a13-8467-07f04339c86d" containerName="pruner" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.211734 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee14eb8b-710a-4d45-b2a6-008c2f02b154" containerName="registry-server" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.212171 4760 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.212292 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.212504 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42" gracePeriod=15 Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.212587 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316" gracePeriod=15 Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.212611 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928" gracePeriod=15 Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.212671 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f" gracePeriod=15 Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.212703 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711" gracePeriod=15 Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.215111 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.215391 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.215419 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.215439 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.215450 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.215468 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.215480 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.215490 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.215501 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.215519 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.215529 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.215542 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.215552 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.215564 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.215574 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.215735 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.215757 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.215768 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.215783 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.215799 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.215811 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.240837 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.254525 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.254593 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.254646 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.254688 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.254715 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.356424 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.356478 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.356534 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.356583 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.356596 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.356631 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.356665 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.356692 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.356721 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.356767 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.356791 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.356825 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.356834 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: E1227 05:47:37.419439 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.192:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-xz5fn.1884fc63567722a7 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-xz5fn,UID:dad6d9ba-2049-4d43-a786-9ce87644643f,APIVersion:v1,ResourceVersion:28274,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-27 05:47:37.418687143 +0000 UTC m=+180.178756458,LastTimestamp:2025-12-27 05:47:37.418687143 +0000 UTC m=+180.178756458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.458272 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.458343 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.458372 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.458423 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.458469 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 
05:47:37.458501 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.506340 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.506851 4760 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.530165 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee14eb8b-710a-4d45-b2a6-008c2f02b154" path="/var/lib/kubelet/pods/ee14eb8b-710a-4d45-b2a6-008c2f02b154/volumes" Dec 27 05:47:37 crc kubenswrapper[4760]: I1227 05:47:37.539413 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:47:38 crc kubenswrapper[4760]: I1227 05:47:38.286883 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 27 05:47:38 crc kubenswrapper[4760]: I1227 05:47:38.288016 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 27 05:47:38 crc kubenswrapper[4760]: I1227 05:47:38.288760 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316" exitCode=0 Dec 27 05:47:38 crc kubenswrapper[4760]: I1227 05:47:38.288785 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711" exitCode=2 Dec 27 05:47:38 crc kubenswrapper[4760]: I1227 05:47:38.290363 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xz5fn" event={"ID":"dad6d9ba-2049-4d43-a786-9ce87644643f","Type":"ContainerStarted","Data":"956517f983e1582927a6a1bb77f79377f347c3e3e9124e7b62db94d8798278c4"} Dec 27 05:47:39 crc kubenswrapper[4760]: I1227 05:47:39.297406 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 27 05:47:39 crc kubenswrapper[4760]: I1227 05:47:39.298696 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 27 05:47:39 crc kubenswrapper[4760]: I1227 05:47:39.299499 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f" exitCode=0 Dec 27 05:47:39 crc kubenswrapper[4760]: I1227 05:47:39.300462 4760 status_manager.go:851] "Failed to get status for pod" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" pod="openshift-marketplace/redhat-operators-xz5fn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xz5fn\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:39 crc kubenswrapper[4760]: I1227 05:47:39.301075 4760 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:39 crc kubenswrapper[4760]: E1227 05:47:39.390156 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.192:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-xz5fn.1884fc63567722a7 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-xz5fn,UID:dad6d9ba-2049-4d43-a786-9ce87644643f,APIVersion:v1,ResourceVersion:28274,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-27 05:47:37.418687143 +0000 UTC m=+180.178756458,LastTimestamp:2025-12-27 05:47:37.418687143 +0000 UTC m=+180.178756458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 27 05:47:40 crc kubenswrapper[4760]: I1227 05:47:40.307391 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 27 05:47:40 crc kubenswrapper[4760]: I1227 05:47:40.309820 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 27 05:47:40 crc kubenswrapper[4760]: I1227 05:47:40.310704 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928" exitCode=0 Dec 27 05:47:40 crc kubenswrapper[4760]: I1227 05:47:40.310751 4760 scope.go:117] "RemoveContainer" containerID="8d274d296ff87cd34be9d6294bbca0c4a9f35ba3f22b4e1ac4a433e5a41ae10c" Dec 27 05:47:40 crc kubenswrapper[4760]: I1227 05:47:40.313033 4760 generic.go:334] "Generic (PLEG): container finished" podID="7a400a4d-065a-4a2a-8852-0bdb47713da4" containerID="1ddd8ae8a8dbdd5f08f7754f11874039a223d83c56dc9bcda3dd09288b5329c8" exitCode=0 Dec 27 05:47:40 crc kubenswrapper[4760]: I1227 05:47:40.313211 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7a400a4d-065a-4a2a-8852-0bdb47713da4","Type":"ContainerDied","Data":"1ddd8ae8a8dbdd5f08f7754f11874039a223d83c56dc9bcda3dd09288b5329c8"} Dec 27 05:47:40 crc kubenswrapper[4760]: I1227 05:47:40.313969 4760 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:40 crc kubenswrapper[4760]: I1227 05:47:40.314559 4760 status_manager.go:851] "Failed to get status for pod" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:40 crc kubenswrapper[4760]: I1227 05:47:40.314942 4760 status_manager.go:851] "Failed to get status for pod" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" pod="openshift-marketplace/redhat-operators-xz5fn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xz5fn\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:40 crc kubenswrapper[4760]: E1227 05:47:40.359570 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316.scope\": RecentStats: unable to find data in memory cache]" Dec 27 05:47:41 crc kubenswrapper[4760]: I1227 05:47:41.321453 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 27 05:47:41 crc kubenswrapper[4760]: I1227 05:47:41.322238 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42" exitCode=0 Dec 27 05:47:41 crc kubenswrapper[4760]: E1227 05:47:41.932084 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:47:41Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:47:41Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:47:41Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-27T05:47:41Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0c4a61ac83cd2712228fb77777264e704260c0a754fbf8e760cecc0ca0903925\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:487b0ccf8b8646d9b5621e7278c6ba43d27e0e51ca4ba9b247931c5be79c235f\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1650916690},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1e323f1ff6d547db870a1458e06a16459d34481c4099e6e14434f9cc1da26e23\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:21114cf925e0b1916d929ebcabdb5c6a1a2de1f98537dd04c3b2bb8b290e7bfa\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234822496},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8bec5b752213ebb96cc93ced7b5c62bd2d0e9a09b4bca4fb3cce9a1cfa067a7f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:e48652001275c871bac4685831c1fa8383e9337ac0ce4740960a8297f581765d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1203924821},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:7c8e1e19438322bef5753810276d5f436dc998e14ab409a0aa309d3f207b3f93\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:fb55bf6f7b116b6e1d0d567adef72e1a81dc83d7508a9c21eb4202cf959bb9d6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1173412694},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\
\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for 
node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:41 crc kubenswrapper[4760]: E1227 05:47:41.932875 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:41 crc kubenswrapper[4760]: E1227 05:47:41.933366 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:41 crc kubenswrapper[4760]: E1227 05:47:41.933648 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:41 crc kubenswrapper[4760]: E1227 05:47:41.933978 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:41 crc kubenswrapper[4760]: E1227 05:47:41.934007 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 27 05:47:42 crc kubenswrapper[4760]: I1227 05:47:42.869080 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 27 05:47:42 crc kubenswrapper[4760]: I1227 05:47:42.869640 4760 status_manager.go:851] "Failed to get status for pod" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" pod="openshift-marketplace/redhat-operators-xz5fn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xz5fn\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:42 crc kubenswrapper[4760]: I1227 05:47:42.870007 4760 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:42 crc kubenswrapper[4760]: I1227 05:47:42.870324 4760 status_manager.go:851] "Failed to get status for pod" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:42 crc kubenswrapper[4760]: I1227 05:47:42.924444 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a400a4d-065a-4a2a-8852-0bdb47713da4-var-lock\") pod \"7a400a4d-065a-4a2a-8852-0bdb47713da4\" (UID: \"7a400a4d-065a-4a2a-8852-0bdb47713da4\") " Dec 27 05:47:42 crc kubenswrapper[4760]: I1227 05:47:42.924636 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a400a4d-065a-4a2a-8852-0bdb47713da4-var-lock" (OuterVolumeSpecName: "var-lock") pod "7a400a4d-065a-4a2a-8852-0bdb47713da4" (UID: 
"7a400a4d-065a-4a2a-8852-0bdb47713da4"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:47:42 crc kubenswrapper[4760]: I1227 05:47:42.924879 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a400a4d-065a-4a2a-8852-0bdb47713da4-kubelet-dir\") pod \"7a400a4d-065a-4a2a-8852-0bdb47713da4\" (UID: \"7a400a4d-065a-4a2a-8852-0bdb47713da4\") " Dec 27 05:47:42 crc kubenswrapper[4760]: I1227 05:47:42.924929 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a400a4d-065a-4a2a-8852-0bdb47713da4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7a400a4d-065a-4a2a-8852-0bdb47713da4" (UID: "7a400a4d-065a-4a2a-8852-0bdb47713da4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:47:42 crc kubenswrapper[4760]: I1227 05:47:42.924939 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a400a4d-065a-4a2a-8852-0bdb47713da4-kube-api-access\") pod \"7a400a4d-065a-4a2a-8852-0bdb47713da4\" (UID: \"7a400a4d-065a-4a2a-8852-0bdb47713da4\") " Dec 27 05:47:42 crc kubenswrapper[4760]: I1227 05:47:42.925348 4760 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a400a4d-065a-4a2a-8852-0bdb47713da4-var-lock\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:42 crc kubenswrapper[4760]: I1227 05:47:42.925361 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a400a4d-065a-4a2a-8852-0bdb47713da4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:42 crc kubenswrapper[4760]: I1227 05:47:42.929965 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a400a4d-065a-4a2a-8852-0bdb47713da4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7a400a4d-065a-4a2a-8852-0bdb47713da4" (UID: "7a400a4d-065a-4a2a-8852-0bdb47713da4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.026685 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a400a4d-065a-4a2a-8852-0bdb47713da4-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.359702 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7a400a4d-065a-4a2a-8852-0bdb47713da4","Type":"ContainerDied","Data":"237bbd50098ee180e043944b1d09b2b9e387ccf806edab4fb183a17b7f2db4fe"} Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.359759 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="237bbd50098ee180e043944b1d09b2b9e387ccf806edab4fb183a17b7f2db4fe" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.359850 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.364151 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.364985 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4831ccc61772cbc169773f0aab75310ea1a1df0fafea2b51a2de8b6e0cade4b3" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.368847 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.369586 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.370210 4760 status_manager.go:851] "Failed to get status for pod" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.370584 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.370867 4760 status_manager.go:851] "Failed to get status for pod" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" pod="openshift-marketplace/redhat-operators-xz5fn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xz5fn\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.371229 4760 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:43 crc kubenswrapper[4760]: W1227 05:47:43.372713 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-0c599a6cc6adac973465c51f33186e50e375eaf3c5a3569824e7387eb13e874b WatchSource:0}: Error finding container 0c599a6cc6adac973465c51f33186e50e375eaf3c5a3569824e7387eb13e874b: Status 404 returned error can't find the container with id 0c599a6cc6adac973465c51f33186e50e375eaf3c5a3569824e7387eb13e874b Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.384829 4760 status_manager.go:851] "Failed to get status for pod" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" pod="openshift-marketplace/redhat-operators-xz5fn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xz5fn\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.385619 4760 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.385879 4760 status_manager.go:851] "Failed to get status for pod" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.386173 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.433181 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.433235 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.433358 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.433591 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.433634 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.433579 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.508309 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.534645 4760 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.534674 4760 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.534685 4760 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.971972 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:47:43 crc kubenswrapper[4760]: I1227 05:47:43.972028 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.032898 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.033554 4760 status_manager.go:851] "Failed to get status for pod" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.033977 4760 status_manager.go:851] "Failed to get status for pod" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" pod="openshift-marketplace/redhat-operators-xz5fn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xz5fn\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.034544 4760 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: E1227 05:47:44.251874 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: E1227 05:47:44.252417 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: E1227 05:47:44.253573 4760 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: E1227 05:47:44.254645 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: E1227 05:47:44.255047 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.255100 4760 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 27 05:47:44 crc kubenswrapper[4760]: E1227 05:47:44.255434 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="200ms" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.377286 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b2186504817ffaeba6019af5447b14aef0074c04fd3d0420a95c1ab4700e406e"} Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.377369 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0c599a6cc6adac973465c51f33186e50e375eaf3c5a3569824e7387eb13e874b"} Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.380270 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7msw2" event={"ID":"55c01569-3cd2-4c5f-9039-b61176dac0f3","Type":"ContainerStarted","Data":"2ed2f0e11f704c3684e25cfe5d2472c52463bd43aa53479e8273d055b1031e02"} Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.380328 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.381214 4760 status_manager.go:851] "Failed to get status for pod" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.381654 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.382675 4760 status_manager.go:851] "Failed to get status for pod" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" pod="openshift-marketplace/redhat-operators-xz5fn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xz5fn\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.383168 4760 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.384461 4760 status_manager.go:851] "Failed to get status for pod" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.384762 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.385147 4760 status_manager.go:851] "Failed to get status for pod" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" pod="openshift-marketplace/redhat-operators-xz5fn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xz5fn\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.385468 4760 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.415470 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.416051 4760 status_manager.go:851] "Failed to get status for pod" 
podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.416465 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.416765 4760 status_manager.go:851] "Failed to get status for pod" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" pod="openshift-marketplace/redhat-operators-xz5fn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xz5fn\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: I1227 05:47:44.417071 4760 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:44 crc kubenswrapper[4760]: E1227 05:47:44.456844 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="400ms" Dec 27 05:47:44 crc kubenswrapper[4760]: E1227 05:47:44.858186 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="800ms" Dec 27 05:47:45 crc kubenswrapper[4760]: I1227 05:47:45.386586 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:45 crc kubenswrapper[4760]: I1227 05:47:45.387459 4760 status_manager.go:851] "Failed to get status for pod" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" pod="openshift-marketplace/redhat-operators-xz5fn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xz5fn\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:45 crc kubenswrapper[4760]: I1227 05:47:45.387847 4760 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:45 crc kubenswrapper[4760]: I1227 05:47:45.388392 4760 status_manager.go:851] "Failed to get status for pod" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" pod="openshift-marketplace/redhat-operators-7msw2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7msw2\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:45 crc kubenswrapper[4760]: I1227 05:47:45.388849 4760 status_manager.go:851] "Failed to get status for pod" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:45 crc kubenswrapper[4760]: I1227 05:47:45.389390 4760 status_manager.go:851] "Failed to get status for pod" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" pod="openshift-marketplace/redhat-operators-7msw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7msw2\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:45 crc kubenswrapper[4760]: I1227 05:47:45.389865 4760 status_manager.go:851] "Failed to get status for pod" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:45 crc kubenswrapper[4760]: I1227 05:47:45.390167 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:45 crc kubenswrapper[4760]: I1227 05:47:45.390605 4760 status_manager.go:851] "Failed to get status for pod" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" pod="openshift-marketplace/redhat-operators-xz5fn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xz5fn\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:45 crc kubenswrapper[4760]: I1227 05:47:45.391126 4760 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:45 crc kubenswrapper[4760]: E1227 05:47:45.660203 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="1.6s" Dec 27 05:47:47 crc kubenswrapper[4760]: E1227 05:47:47.261195 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="3.2s" Dec 27 05:47:47 crc kubenswrapper[4760]: I1227 05:47:47.504487 4760 status_manager.go:851] "Failed to get status for pod" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection 
refused" Dec 27 05:47:47 crc kubenswrapper[4760]: I1227 05:47:47.504987 4760 status_manager.go:851] "Failed to get status for pod" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" pod="openshift-marketplace/redhat-operators-xz5fn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xz5fn\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:47 crc kubenswrapper[4760]: I1227 05:47:47.505416 4760 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:47 crc kubenswrapper[4760]: I1227 05:47:47.505708 4760 status_manager.go:851] "Failed to get status for pod" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" pod="openshift-marketplace/redhat-operators-7msw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7msw2\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:49 crc kubenswrapper[4760]: E1227 05:47:49.392444 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.192:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-xz5fn.1884fc63567722a7 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-xz5fn,UID:dad6d9ba-2049-4d43-a786-9ce87644643f,APIVersion:v1,ResourceVersion:28274,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-27 05:47:37.418687143 +0000 UTC m=+180.178756458,LastTimestamp:2025-12-27 05:47:37.418687143 +0000 UTC m=+180.178756458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 27 05:47:49 crc kubenswrapper[4760]: I1227 05:47:49.502601 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:49 crc kubenswrapper[4760]: I1227 05:47:49.503779 4760 status_manager.go:851] "Failed to get status for pod" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" pod="openshift-marketplace/redhat-operators-7msw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7msw2\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:49 crc kubenswrapper[4760]: I1227 05:47:49.504322 4760 status_manager.go:851] "Failed to get status for pod" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:49 crc kubenswrapper[4760]: I1227 05:47:49.505278 4760 status_manager.go:851] "Failed to get status for pod" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" pod="openshift-marketplace/redhat-operators-xz5fn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xz5fn\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:49 crc kubenswrapper[4760]: I1227 05:47:49.505589 4760 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:49 crc kubenswrapper[4760]: I1227 05:47:49.524213 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0d712cd-5e1e-4256-8df7-8ab34945b2aa" Dec 27 05:47:49 crc kubenswrapper[4760]: I1227 05:47:49.524271 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0d712cd-5e1e-4256-8df7-8ab34945b2aa" Dec 27 05:47:49 crc kubenswrapper[4760]: E1227 05:47:49.524839 4760 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:49 crc kubenswrapper[4760]: I1227 05:47:49.525484 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:49 crc kubenswrapper[4760]: W1227 05:47:49.564767 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-51a8ed2b59e62d3ce70ffdd256f8ae29708ffa54da5ee725e6b4016013944c7e WatchSource:0}: Error finding container 51a8ed2b59e62d3ce70ffdd256f8ae29708ffa54da5ee725e6b4016013944c7e: Status 404 returned error can't find the container with id 51a8ed2b59e62d3ce70ffdd256f8ae29708ffa54da5ee725e6b4016013944c7e Dec 27 05:47:50 crc kubenswrapper[4760]: I1227 05:47:50.417291 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 27 05:47:50 crc kubenswrapper[4760]: I1227 05:47:50.417683 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d"} Dec 27 05:47:50 crc kubenswrapper[4760]: I1227 05:47:50.417593 4760 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d" exitCode=1 Dec 27 05:47:50 crc kubenswrapper[4760]: I1227 05:47:50.418303 4760 scope.go:117] "RemoveContainer" containerID="98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d" Dec 27 05:47:50 crc kubenswrapper[4760]: I1227 05:47:50.418849 4760 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:50 crc kubenswrapper[4760]: I1227 05:47:50.419527 4760 status_manager.go:851] "Failed to get status for pod" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" pod="openshift-marketplace/redhat-operators-7msw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7msw2\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:50 crc kubenswrapper[4760]: I1227 05:47:50.419829 4760 status_manager.go:851] "Failed to get status for pod" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:50 crc kubenswrapper[4760]: I1227 05:47:50.420026 4760 status_manager.go:851] "Failed to get status for pod" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" pod="openshift-marketplace/redhat-operators-xz5fn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xz5fn\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:50 crc kubenswrapper[4760]: I1227 05:47:50.420187 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"51a8ed2b59e62d3ce70ffdd256f8ae29708ffa54da5ee725e6b4016013944c7e"} Dec 27 05:47:50 crc 
Dec 27 05:47:50 crc kubenswrapper[4760]: E1227 05:47:50.462007 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="6.4s"
Dec 27 05:47:50 crc kubenswrapper[4760]: E1227 05:47:50.483047 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-conmon-98f81063f948a7e73ec54ae07858c2946ffc65a2b6d9fad78da152ebf549ee5d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316.scope\": RecentStats: unable to find data in memory cache]"
Dec 27 05:47:51 crc kubenswrapper[4760]: I1227 05:47:51.429338 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Dec 27 05:47:51 crc kubenswrapper[4760]: I1227 05:47:51.429980 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0b2a88b658fe619c2bd897fc8b71ef37c52816248541b70ae683137bbb055c5a"}
Dec 27 05:47:51 crc kubenswrapper[4760]: I1227 05:47:51.431076 4760 status_manager.go:851] "Failed to get status for pod" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Dec 27 05:47:51 crc kubenswrapper[4760]: I1227 05:47:51.431593 4760 status_manager.go:851] "Failed to get status for pod" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" pod="openshift-marketplace/redhat-operators-xz5fn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xz5fn\": dial tcp 38.102.83.192:6443: connect: connection refused"
Dec 27 05:47:51 crc kubenswrapper[4760]: I1227 05:47:51.431849 4760 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Dec 27 05:47:51 crc kubenswrapper[4760]: I1227 05:47:51.432178 4760 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Dec 27 05:47:51 crc kubenswrapper[4760]: I1227 05:47:51.432388 4760 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="d28750ba283f49e18c6bd1f113dc58887d4c14b97b9e55d826fbf29d6d815c28" exitCode=0
Dec 27 05:47:51 crc kubenswrapper[4760]: I1227 05:47:51.432429 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"d28750ba283f49e18c6bd1f113dc58887d4c14b97b9e55d826fbf29d6d815c28"}
Dec 27 05:47:51 crc kubenswrapper[4760]: I1227 05:47:51.432534 4760 status_manager.go:851] "Failed to get status for pod" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" pod="openshift-marketplace/redhat-operators-7msw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7msw2\": dial tcp 38.102.83.192:6443: connect: connection refused"
Dec 27 05:47:51 crc kubenswrapper[4760]: I1227 05:47:51.432627 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0d712cd-5e1e-4256-8df7-8ab34945b2aa"
Dec 27 05:47:51 crc kubenswrapper[4760]: I1227 05:47:51.432644 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0d712cd-5e1e-4256-8df7-8ab34945b2aa"
Dec 27 05:47:51 crc kubenswrapper[4760]: I1227 05:47:51.433068 4760 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Dec 27 05:47:51 crc kubenswrapper[4760]: I1227 05:47:51.433374 4760 status_manager.go:851] "Failed to get status for pod" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" pod="openshift-marketplace/redhat-operators-7msw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7msw2\": dial tcp 38.102.83.192:6443: connect: connection refused"
Dec 27 05:47:51 crc kubenswrapper[4760]: E1227 05:47:51.433421 4760 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 27 05:47:51 crc kubenswrapper[4760]: I1227 05:47:51.433657 4760 status_manager.go:851] "Failed to get status for pod" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Dec 27 05:47:51 crc kubenswrapper[4760]: I1227 05:47:51.434021 4760 status_manager.go:851] "Failed to get status for pod" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" pod="openshift-marketplace/redhat-operators-xz5fn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xz5fn\": dial tcp 38.102.83.192:6443: connect: connection refused"
Dec 27 05:47:51 crc kubenswrapper[4760]: I1227 05:47:51.434369 4760 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Dec 27 05:47:52 crc kubenswrapper[4760]: I1227 05:47:52.440439 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"25df3e963444a717350e955757458e23820b1254f3bb1fd0bf01bea152d930f1"} Dec 27 05:47:53 crc kubenswrapper[4760]: I1227 05:47:53.449162 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fb27b7677877f7838229fbdc4b86579cbf36889cd02b444152830659d1f9ca75"} Dec 27 05:47:53 crc kubenswrapper[4760]: I1227 05:47:53.449433 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:53 crc kubenswrapper[4760]: I1227 05:47:53.449444 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f7bed3fd45996c31ea24d28753275ac95dfa17d0695f9f776bf470499c2afef3"} Dec 27 05:47:53 crc kubenswrapper[4760]: I1227 05:47:53.449454 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7a544062d1945094091b9986b879702d3d456f6356bacc8dc0f02dc240e3951d"} Dec 27 05:47:53 crc kubenswrapper[4760]: I1227 05:47:53.449465 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ef1cd63c712dba98bcbfaa820e3f43381904ec12eaff1efa4658139d9d70cbb1"} Dec 27 05:47:53 crc kubenswrapper[4760]: I1227 05:47:53.449543 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0d712cd-5e1e-4256-8df7-8ab34945b2aa" Dec 27 05:47:53 crc kubenswrapper[4760]: I1227 05:47:53.449579 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0d712cd-5e1e-4256-8df7-8ab34945b2aa" Dec 27 05:47:53 crc kubenswrapper[4760]: I1227 05:47:53.883447 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:47:53 crc kubenswrapper[4760]: I1227 05:47:53.883886 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:47:53 crc kubenswrapper[4760]: I1227 05:47:53.911949 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:47:53 crc kubenswrapper[4760]: I1227 05:47:53.961030 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:47:54 crc kubenswrapper[4760]: I1227 05:47:54.498330 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:47:54 crc kubenswrapper[4760]: I1227 05:47:54.526572 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:54 crc kubenswrapper[4760]: I1227 05:47:54.526618 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:54 crc kubenswrapper[4760]: I1227 05:47:54.531419 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]log ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]etcd ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/generic-apiserver-start-informers ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/priority-and-fairness-filter ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/start-apiextensions-informers ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/start-apiextensions-controllers ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/crd-informer-synced ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/start-system-namespaces-controller ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 27 05:47:54 crc kubenswrapper[4760]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/bootstrap-controller ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/start-kube-aggregator-informers ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/apiservice-registration-controller ok Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 27 05:47:54 crc kubenswrapper[4760]: 
Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 27 05:47:54 crc kubenswrapper[4760]: [+]autoregister-completion ok
Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/apiservice-openapi-controller ok
Dec 27 05:47:54 crc kubenswrapper[4760]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 27 05:47:54 crc kubenswrapper[4760]: livez check failed
Dec 27 05:47:54 crc kubenswrapper[4760]: I1227 05:47:54.531482 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 27 05:47:56 crc kubenswrapper[4760]: I1227 05:47:56.736744 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" podUID="933d294b-c115-4bd3-ade2-1ae37665ae1b" containerName="oauth-openshift" containerID="cri-o://60144e399dad85c8406261b1cc4ea674d6fb860e4794bb5fd3ebd476e13dea7f" gracePeriod=15
Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.152718 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w"
Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.221174 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tchcj\" (UniqueName: \"kubernetes.io/projected/933d294b-c115-4bd3-ade2-1ae37665ae1b-kube-api-access-tchcj\") pod \"933d294b-c115-4bd3-ade2-1ae37665ae1b\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") "
Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.221248 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-audit-policies\") pod \"933d294b-c115-4bd3-ade2-1ae37665ae1b\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") "
Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.221316 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-trusted-ca-bundle\") pod \"933d294b-c115-4bd3-ade2-1ae37665ae1b\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") "
Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.221370 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-error\") pod \"933d294b-c115-4bd3-ade2-1ae37665ae1b\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") "
Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.221407 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-idp-0-file-data\") pod \"933d294b-c115-4bd3-ade2-1ae37665ae1b\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") "
Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.221451 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-provider-selection\") pod \"933d294b-c115-4bd3-ade2-1ae37665ae1b\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") "
Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.222019 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "933d294b-c115-4bd3-ade2-1ae37665ae1b" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.222054 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "933d294b-c115-4bd3-ade2-1ae37665ae1b" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.222644 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-ocp-branding-template\") pod \"933d294b-c115-4bd3-ade2-1ae37665ae1b\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") "
Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.222696 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-serving-cert\") pod \"933d294b-c115-4bd3-ade2-1ae37665ae1b\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") "
Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.222762 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-login\") pod \"933d294b-c115-4bd3-ade2-1ae37665ae1b\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") "
Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.222796 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-cliconfig\") pod \"933d294b-c115-4bd3-ade2-1ae37665ae1b\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") "
Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.222813 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-service-ca\") pod \"933d294b-c115-4bd3-ade2-1ae37665ae1b\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") "
Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.223343 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "933d294b-c115-4bd3-ade2-1ae37665ae1b" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.223692 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "933d294b-c115-4bd3-ade2-1ae37665ae1b" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.223760 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-session\") pod \"933d294b-c115-4bd3-ade2-1ae37665ae1b\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.223809 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/933d294b-c115-4bd3-ade2-1ae37665ae1b-audit-dir\") pod \"933d294b-c115-4bd3-ade2-1ae37665ae1b\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.223835 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-router-certs\") pod \"933d294b-c115-4bd3-ade2-1ae37665ae1b\" (UID: \"933d294b-c115-4bd3-ade2-1ae37665ae1b\") " Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.223879 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/933d294b-c115-4bd3-ade2-1ae37665ae1b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "933d294b-c115-4bd3-ade2-1ae37665ae1b" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.224287 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.224334 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.224349 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.224362 4760 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/933d294b-c115-4bd3-ade2-1ae37665ae1b-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.224373 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/933d294b-c115-4bd3-ade2-1ae37665ae1b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.226959 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/933d294b-c115-4bd3-ade2-1ae37665ae1b-kube-api-access-tchcj" (OuterVolumeSpecName: "kube-api-access-tchcj") pod "933d294b-c115-4bd3-ade2-1ae37665ae1b" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b"). InnerVolumeSpecName "kube-api-access-tchcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.227211 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "933d294b-c115-4bd3-ade2-1ae37665ae1b" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.228953 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "933d294b-c115-4bd3-ade2-1ae37665ae1b" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.229013 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "933d294b-c115-4bd3-ade2-1ae37665ae1b" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.229210 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "933d294b-c115-4bd3-ade2-1ae37665ae1b" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.229373 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "933d294b-c115-4bd3-ade2-1ae37665ae1b" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.229760 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "933d294b-c115-4bd3-ade2-1ae37665ae1b" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.229881 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "933d294b-c115-4bd3-ade2-1ae37665ae1b" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.232508 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "933d294b-c115-4bd3-ade2-1ae37665ae1b" (UID: "933d294b-c115-4bd3-ade2-1ae37665ae1b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.325412 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.325449 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.325461 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.325474 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.325482 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.325491 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.325499 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.325509 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/933d294b-c115-4bd3-ade2-1ae37665ae1b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.325519 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tchcj\" (UniqueName: \"kubernetes.io/projected/933d294b-c115-4bd3-ade2-1ae37665ae1b-kube-api-access-tchcj\") on node \"crc\" DevicePath \"\"" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.476206 4760 generic.go:334] "Generic (PLEG): container finished" podID="933d294b-c115-4bd3-ade2-1ae37665ae1b" containerID="60144e399dad85c8406261b1cc4ea674d6fb860e4794bb5fd3ebd476e13dea7f" exitCode=0 Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.476307 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.476338 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" event={"ID":"933d294b-c115-4bd3-ade2-1ae37665ae1b","Type":"ContainerDied","Data":"60144e399dad85c8406261b1cc4ea674d6fb860e4794bb5fd3ebd476e13dea7f"} Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.476400 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x4v5w" event={"ID":"933d294b-c115-4bd3-ade2-1ae37665ae1b","Type":"ContainerDied","Data":"82d199572b4f89cd8b3d61f4c8ea3e1de7904d0318ded3373a86d2b45a60ac8a"} Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.476422 4760 scope.go:117] "RemoveContainer" containerID="60144e399dad85c8406261b1cc4ea674d6fb860e4794bb5fd3ebd476e13dea7f" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.497221 4760 scope.go:117] "RemoveContainer" containerID="60144e399dad85c8406261b1cc4ea674d6fb860e4794bb5fd3ebd476e13dea7f" Dec 27 05:47:57 crc kubenswrapper[4760]: E1227 05:47:57.497695 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60144e399dad85c8406261b1cc4ea674d6fb860e4794bb5fd3ebd476e13dea7f\": container with ID starting with 60144e399dad85c8406261b1cc4ea674d6fb860e4794bb5fd3ebd476e13dea7f not found: ID does not exist" containerID="60144e399dad85c8406261b1cc4ea674d6fb860e4794bb5fd3ebd476e13dea7f" Dec 27 05:47:57 crc kubenswrapper[4760]: I1227 05:47:57.497733 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60144e399dad85c8406261b1cc4ea674d6fb860e4794bb5fd3ebd476e13dea7f"} err="failed to get container status \"60144e399dad85c8406261b1cc4ea674d6fb860e4794bb5fd3ebd476e13dea7f\": rpc error: code = NotFound desc = could not find container \"60144e399dad85c8406261b1cc4ea674d6fb860e4794bb5fd3ebd476e13dea7f\": container with ID starting with 60144e399dad85c8406261b1cc4ea674d6fb860e4794bb5fd3ebd476e13dea7f not found: ID does not exist" Dec 27 05:47:58 crc kubenswrapper[4760]: I1227 05:47:58.133151 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:47:58 crc kubenswrapper[4760]: I1227 05:47:58.141234 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 27 05:47:58 crc kubenswrapper[4760]: I1227 05:47:58.462848 4760 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 27 05:47:58 crc kubenswrapper[4760]: I1227 05:47:58.482805 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0d712cd-5e1e-4256-8df7-8ab34945b2aa" Dec 27 05:47:58 crc kubenswrapper[4760]: I1227 05:47:58.482841 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0d712cd-5e1e-4256-8df7-8ab34945b2aa" Dec 27 05:47:58 crc kubenswrapper[4760]: I1227 05:47:58.579716 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="eb10eed0-51c8-4ded-aba7-bf06cf5dd736" Dec 27 05:48:00 crc kubenswrapper[4760]: E1227 05:48:00.615987 4760 
Dec 27 05:48:03 crc kubenswrapper[4760]: I1227 05:48:03.915462 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 27 05:48:05 crc kubenswrapper[4760]: I1227 05:48:05.288140 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 27 05:48:05 crc kubenswrapper[4760]: I1227 05:48:05.288224 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 27 05:48:05 crc kubenswrapper[4760]: I1227 05:48:05.288286 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh"
Dec 27 05:48:05 crc kubenswrapper[4760]: I1227 05:48:05.289004 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79"} pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 27 05:48:05 crc kubenswrapper[4760]: I1227 05:48:05.289130 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" containerID="cri-o://16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79" gracePeriod=600
Dec 27 05:48:06 crc kubenswrapper[4760]: I1227 05:48:06.534946 4760 generic.go:334] "Generic (PLEG): container finished" podID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerID="16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79" exitCode=0
Dec 27 05:48:06 crc kubenswrapper[4760]: I1227 05:48:06.535050 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerDied","Data":"16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79"}
Dec 27 05:48:07 crc kubenswrapper[4760]: I1227 05:48:07.545292 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerStarted","Data":"c28937ac367ba8ff932ff70ef08e94c615fc93fe52c601f6ac3e8ae45cf34b0d"}
Dec 27 05:48:09 crc kubenswrapper[4760]: I1227 05:48:09.065539 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 27 05:48:09 crc kubenswrapper[4760]: I1227 05:48:09.371719 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 27 05:48:09 crc kubenswrapper[4760]: I1227 05:48:09.599436 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 27 05:48:09 crc kubenswrapper[4760]: I1227 05:48:09.865950 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 27 05:48:09 crc kubenswrapper[4760]: I1227 05:48:09.944546 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Dec 27 05:48:10 crc kubenswrapper[4760]: I1227 05:48:10.038446 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 27 05:48:10 crc kubenswrapper[4760]: I1227 05:48:10.088823 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 27 05:48:10 crc kubenswrapper[4760]: I1227 05:48:10.214904 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 27 05:48:10 crc kubenswrapper[4760]: I1227 05:48:10.278213 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 27 05:48:10 crc kubenswrapper[4760]: I1227 05:48:10.278559 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 27 05:48:10 crc kubenswrapper[4760]: I1227 05:48:10.311837 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 27 05:48:10 crc kubenswrapper[4760]: I1227 05:48:10.409678 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 27 05:48:10 crc kubenswrapper[4760]: I1227 05:48:10.600423 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Dec 27 05:48:10 crc kubenswrapper[4760]: I1227 05:48:10.670395 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Dec 27 05:48:10 crc kubenswrapper[4760]: I1227 05:48:10.716479 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 27 05:48:10 crc kubenswrapper[4760]: E1227 05:48:10.742867 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316.scope\": RecentStats: unable to find data in memory cache]"
Dec 27 05:48:10 crc kubenswrapper[4760]: I1227 05:48:10.810485 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 27 05:48:11 crc kubenswrapper[4760]: I1227 05:48:11.126724 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 27 05:48:11 crc kubenswrapper[4760]: I1227 05:48:11.232913 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 27 05:48:11 crc kubenswrapper[4760]: I1227 05:48:11.314884 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Dec 27 05:48:11 crc kubenswrapper[4760]: I1227 05:48:11.538382 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 27 05:48:11 crc kubenswrapper[4760]: I1227 05:48:11.545649 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 27 05:48:11 crc kubenswrapper[4760]: I1227 05:48:11.714788 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 27 05:48:11 crc kubenswrapper[4760]: I1227 05:48:11.818969 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 27 05:48:11 crc kubenswrapper[4760]: I1227 05:48:11.944901 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Dec 27 05:48:12 crc kubenswrapper[4760]: I1227 05:48:12.191624 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Dec 27 05:48:12 crc kubenswrapper[4760]: I1227 05:48:12.222451 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 27 05:48:12 crc kubenswrapper[4760]: I1227 05:48:12.292432 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 27 05:48:12 crc kubenswrapper[4760]: I1227 05:48:12.321792 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 27 05:48:12 crc kubenswrapper[4760]: I1227 05:48:12.423585 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 27 05:48:12 crc kubenswrapper[4760]: I1227 05:48:12.476361 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 27 05:48:12 crc kubenswrapper[4760]: I1227 05:48:12.564768 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 27 05:48:12 crc kubenswrapper[4760]: I1227 05:48:12.585003 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Dec 27 05:48:12 crc kubenswrapper[4760]: I1227 05:48:12.755908 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 27 05:48:12 crc kubenswrapper[4760]: I1227 05:48:12.950631 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.008473 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.020037 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.047943 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.059759 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.084035 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.129061 4760 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.144348 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.165586 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.236122 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.245375 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.254041 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.313882 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.367549 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.369158 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.383070 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.383119 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.457425 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.571771 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.640314 4760 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.657733 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.738373 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.747834 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.758958 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.787707 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.793577 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.794717 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.810149 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.810232 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 27 05:48:13 crc kubenswrapper[4760]: I1227 05:48:13.850416 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.010982 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.125319 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.181013 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.215636 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.221872 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.231404 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.244718 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.263297 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.268690 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.344393 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.372803 4760 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.379216 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.393625 4760 reflector.go:368] Caches populated for *v1.Secret 
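Each record above carries the systemd-journald prefix (Dec 27 05:48:13 crc kubenswrapper[4760]:) followed by a klog header: severity letter (I/W/E/F), MMDD date, wall-clock time, the emitting process ID, and the source file:line that logged it. A small sketch that splits the klog portion with a regular expression; the pattern is illustrative, not klog's own parser:

```go
package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the header kubelet prints after the journald prefix:
// severity, MMDD, time, PID, source file:line, then the free-form message.
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w./-]+):(\d+)\] (.*)$`)

func main() {
	sample := `I1227 05:48:13.129061 4760 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160`
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s month=%s day=%s time=%s pid=%s source=%s:%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	fmt.Println("message:", m[8])
}
```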
Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.414046 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.495752 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.501500 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.610246 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.691968 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.762254 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.764902 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.847747 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.906183 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 27 05:48:14 crc kubenswrapper[4760]: I1227 05:48:14.933657 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.117027 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.137556 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.137733 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.176207 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.194269 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.261252 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.453756 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.467127 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.496292 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.570556 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.609149 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.734663 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.767875 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.777885 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.782373 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.794049 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.839015 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.846012 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.866259 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.885619 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.887643 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 27 05:48:15 crc kubenswrapper[4760]: I1227 05:48:15.962494 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.004766 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.018310 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.031217 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.081696 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.173916 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
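The reflector.go:368 records come from client-go reflectors completing their initial LIST for each informer cache; the object-"namespace"/"name" form reflects the kubelet narrowing each watch to the single Secret or ConfigMap that pods on this node actually reference. A minimal standalone sketch of the same populate-then-read pattern using a shared informer factory (ordinary namespace-wide informers here, not the kubelet's per-object wiring; running in-cluster is an assumption):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes this runs inside a pod
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	stop := make(chan struct{})
	defer close(stop)

	// One shared factory; each Lister()/Informer() call registers a
	// reflector that lists+watches its resource, and the kubelet-style
	// "Caches populated" moment is when that initial list syncs.
	factory := informers.NewSharedInformerFactory(client, 0)
	secrets := factory.Core().V1().Secrets().Lister()
	factory.Core().V1().ConfigMaps().Informer() // register a ConfigMap informer too

	factory.Start(stop)            // spawn the reflectors
	factory.WaitForCacheSync(stop) // block until caches are populated

	// Reads are now served from the local cache, not the API server.
	s, err := secrets.Secrets("openshift-machine-api").Get("machine-api-operator-tls")
	if err != nil {
		panic(err)
	}
	fmt.Println("got secret from cache:", s.Name)
}
```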
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.362332 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.368712 4760 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.369937 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xz5fn" podStartSLOduration=45.04808004 podStartE2EDuration="2m23.369914896s" podCreationTimestamp="2025-12-27 05:45:53 +0000 UTC" firstStartedPulling="2025-12-27 05:45:56.6497324 +0000 UTC m=+79.409801715" lastFinishedPulling="2025-12-27 05:47:34.971567226 +0000 UTC m=+177.731636571" observedRunningTime="2025-12-27 05:47:58.509293247 +0000 UTC m=+201.269362572" watchObservedRunningTime="2025-12-27 05:48:16.369914896 +0000 UTC m=+219.129984251"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.372565 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=39.372547867 podStartE2EDuration="39.372547867s" podCreationTimestamp="2025-12-27 05:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:47:58.526562796 +0000 UTC m=+201.286632121" watchObservedRunningTime="2025-12-27 05:48:16.372547867 +0000 UTC m=+219.132617212"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.372793 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7msw2" podStartSLOduration=36.716532185 podStartE2EDuration="2m23.372784163s" podCreationTimestamp="2025-12-27 05:45:53 +0000 UTC" firstStartedPulling="2025-12-27 05:45:56.642233527 +0000 UTC m=+79.402302842" lastFinishedPulling="2025-12-27 05:47:43.298485505 +0000 UTC m=+186.058554820" observedRunningTime="2025-12-27 05:47:58.481763938 +0000 UTC m=+201.241833263" watchObservedRunningTime="2025-12-27 05:48:16.372784163 +0000 UTC m=+219.132853508"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.376820 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x4v5w","openshift-kube-apiserver/kube-apiserver-crc"]
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.376932 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.381573 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.405893 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.405868284 podStartE2EDuration="18.405868284s" podCreationTimestamp="2025-12-27 05:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:48:16.405803063 +0000 UTC m=+219.165872438" watchObservedRunningTime="2025-12-27 05:48:16.405868284 +0000 UTC m=+219.165937639"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.419983 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.493689 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
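The pod_startup_latency_tracker figures above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E value minus the image-pull window (lastFinishedPulling minus firstStartedPulling); pods that never pulled (zero-value pull timestamps) report SLO equal to E2E. A quick check against the redhat-operators-xz5fn numbers; the subtraction rule is inferred from these values rather than quoted from kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go accepts a fractional second on parse even when the layout has none.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Values copied from the redhat-operators-xz5fn entry (monotonic
	// "m=+..." suffixes dropped).
	created := parse("2025-12-27 05:45:53 +0000 UTC")
	firstPull := parse("2025-12-27 05:45:56.6497324 +0000 UTC")
	lastPull := parse("2025-12-27 05:47:34.971567226 +0000 UTC")
	observed := parse("2025-12-27 05:48:16.369914896 +0000 UTC")

	e2e := observed.Sub(created)         // podStartE2EDuration: 2m23.369914896s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: ~45.048s

	fmt.Println("E2E:", e2e)
	fmt.Println("SLO:", slo) // matches 45.04808004 up to monotonic-clock rounding
}
```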
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.533861 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.554786 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.567383 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.723977 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.727293 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.768185 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.946732 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Dec 27 05:48:16 crc kubenswrapper[4760]: I1227 05:48:16.949768 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.005696 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.039022 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.143810 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.146981 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.260657 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.285730 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.289592 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.298393 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.399531 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.420892 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.514524 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.517047 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="933d294b-c115-4bd3-ade2-1ae37665ae1b" path="/var/lib/kubelet/pods/933d294b-c115-4bd3-ade2-1ae37665ae1b/volumes"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.546363 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.554538 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.569403 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.621496 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.691215 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.706996 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 27 05:48:17 crc kubenswrapper[4760]: I1227 05:48:17.981506 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 27 05:48:18 crc kubenswrapper[4760]: I1227 05:48:18.018614 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 27 05:48:18 crc kubenswrapper[4760]: I1227 05:48:18.072988 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 27 05:48:18 crc kubenswrapper[4760]: I1227 05:48:18.078983 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 27 05:48:18 crc kubenswrapper[4760]: I1227 05:48:18.085213 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 27 05:48:18 crc kubenswrapper[4760]: I1227 05:48:18.158162 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 27 05:48:18 crc kubenswrapper[4760]: I1227 05:48:18.176798 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 27 05:48:18 crc kubenswrapper[4760]: I1227 05:48:18.180201 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 27 05:48:18 crc kubenswrapper[4760]: I1227 05:48:18.205173 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 27 05:48:18 crc kubenswrapper[4760]: I1227 05:48:18.205321 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 27 05:48:18 crc kubenswrapper[4760]: I1227 05:48:18.328505 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 27 05:48:18 crc kubenswrapper[4760]: I1227 05:48:18.488684 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Dec 27 05:48:18 crc kubenswrapper[4760]: I1227 05:48:18.610372 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 27 05:48:18 crc kubenswrapper[4760]: I1227 05:48:18.638585 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 27 05:48:18 crc kubenswrapper[4760]: I1227 05:48:18.758754 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Dec 27 05:48:18 crc kubenswrapper[4760]: I1227 05:48:18.794605 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Dec 27 05:48:18 crc kubenswrapper[4760]: I1227 05:48:18.897705 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.039410 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.077606 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.144575 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.148842 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.351986 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.386314 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.421293 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.440453 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.533810 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.540951 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.549011 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.561272 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.577202 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.743202 4760 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.843793 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.866283 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 27 05:48:19 crc kubenswrapper[4760]: I1227 05:48:19.991012 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 27 05:48:20 crc kubenswrapper[4760]: I1227 05:48:20.045048 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Dec 27 05:48:20 crc kubenswrapper[4760]: I1227 05:48:20.045816 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 27 05:48:20 crc kubenswrapper[4760]: I1227 05:48:20.054961 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 27 05:48:20 crc kubenswrapper[4760]: I1227 05:48:20.145331 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 27 05:48:20 crc kubenswrapper[4760]: I1227 05:48:20.370999 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 27 05:48:20 crc kubenswrapper[4760]: I1227 05:48:20.373524 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 27 05:48:20 crc kubenswrapper[4760]: I1227 05:48:20.418391 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 27 05:48:20 crc kubenswrapper[4760]: I1227 05:48:20.451017 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 27 05:48:20 crc kubenswrapper[4760]: I1227 05:48:20.568939 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Dec 27 05:48:20 crc kubenswrapper[4760]: I1227 05:48:20.667672 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 27 05:48:20 crc kubenswrapper[4760]: I1227 05:48:20.800705 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 27 05:48:20 crc kubenswrapper[4760]: E1227 05:48:20.845847 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316.scope\": RecentStats: unable to find data in memory cache]"
Dec 27 05:48:20 crc kubenswrapper[4760]: I1227 05:48:20.962493 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 27 05:48:21 crc kubenswrapper[4760]: I1227 05:48:21.048252 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 27 05:48:21 crc kubenswrapper[4760]: I1227 05:48:21.049408 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 27 05:48:21 crc kubenswrapper[4760]: I1227 05:48:21.174680 4760 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 27 05:48:21 crc kubenswrapper[4760]: I1227 05:48:21.174971 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b2186504817ffaeba6019af5447b14aef0074c04fd3d0420a95c1ab4700e406e" gracePeriod=5
Dec 27 05:48:21 crc kubenswrapper[4760]: I1227 05:48:21.471142 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Dec 27 05:48:21 crc kubenswrapper[4760]: I1227 05:48:21.571885 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 27 05:48:21 crc kubenswrapper[4760]: I1227 05:48:21.641255 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Dec 27 05:48:21 crc kubenswrapper[4760]: I1227 05:48:21.792116 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Dec 27 05:48:21 crc kubenswrapper[4760]: I1227 05:48:21.923516 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Dec 27 05:48:21 crc kubenswrapper[4760]: I1227 05:48:21.941901 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 27 05:48:21 crc kubenswrapper[4760]: I1227 05:48:21.990412 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 27 05:48:22 crc kubenswrapper[4760]: I1227 05:48:22.028563 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Dec 27 05:48:22 crc kubenswrapper[4760]: I1227 05:48:22.191656 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 27 05:48:22 crc kubenswrapper[4760]: I1227 05:48:22.202946 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 27 05:48:22 crc kubenswrapper[4760]: I1227 05:48:22.215561 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 27 05:48:22 crc kubenswrapper[4760]: I1227 05:48:22.357735 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 27 05:48:22 crc kubenswrapper[4760]: I1227 05:48:22.357956 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 27 05:48:22 crc kubenswrapper[4760]: I1227 05:48:22.481357 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 27 05:48:22 crc kubenswrapper[4760]: I1227 05:48:22.607940 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
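The gracePeriod=5 kill above follows the standard termination sequence: the runtime delivers SIGTERM, waits out the grace period, then escalates to a forced kill. A generic sketch of that pattern against a locally spawned process; kubelet itself delegates this step to CRI-O over the CRI, so the names here are illustrative only:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to the grace period for the
// process to exit, then force-kills it if it is still running.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace period expired: escalate
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithGrace(cmd, 5*time.Second))
}
```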
Dec 27 05:48:22 crc kubenswrapper[4760]: I1227 05:48:22.639055 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 27 05:48:22 crc kubenswrapper[4760]: I1227 05:48:22.809529 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 27 05:48:22 crc kubenswrapper[4760]: I1227 05:48:22.847156 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 27 05:48:22 crc kubenswrapper[4760]: I1227 05:48:22.886826 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Dec 27 05:48:22 crc kubenswrapper[4760]: I1227 05:48:22.921461 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 27 05:48:22 crc kubenswrapper[4760]: I1227 05:48:22.981934 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Dec 27 05:48:23 crc kubenswrapper[4760]: I1227 05:48:23.032262 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 27 05:48:23 crc kubenswrapper[4760]: I1227 05:48:23.062610 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 27 05:48:23 crc kubenswrapper[4760]: I1227 05:48:23.119649 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 27 05:48:23 crc kubenswrapper[4760]: I1227 05:48:23.276328 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 27 05:48:23 crc kubenswrapper[4760]: I1227 05:48:23.394644 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 27 05:48:23 crc kubenswrapper[4760]: I1227 05:48:23.508139 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 27 05:48:23 crc kubenswrapper[4760]: I1227 05:48:23.519745 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 27 05:48:23 crc kubenswrapper[4760]: I1227 05:48:23.533690 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 27 05:48:23 crc kubenswrapper[4760]: I1227 05:48:23.685787 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 27 05:48:23 crc kubenswrapper[4760]: I1227 05:48:23.696889 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 27 05:48:23 crc kubenswrapper[4760]: I1227 05:48:23.712780 4760 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 27 05:48:23 crc kubenswrapper[4760]: I1227 05:48:23.728726 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 27 05:48:23 crc kubenswrapper[4760]: I1227 05:48:23.732283 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 27 05:48:23 crc kubenswrapper[4760]: I1227 05:48:23.758314 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Dec 27 05:48:23 crc kubenswrapper[4760]: I1227 05:48:23.962136 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.080592 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.175198 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.439027 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.446723 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.463852 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-76b96558df-wcngt"]
Dec 27 05:48:24 crc kubenswrapper[4760]: E1227 05:48:24.464061 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="933d294b-c115-4bd3-ade2-1ae37665ae1b" containerName="oauth-openshift"
Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.464074 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="933d294b-c115-4bd3-ade2-1ae37665ae1b" containerName="oauth-openshift"
Dec 27 05:48:24 crc kubenswrapper[4760]: E1227 05:48:24.464085 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" containerName="installer"
Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.464103 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" containerName="installer"
Dec 27 05:48:24 crc kubenswrapper[4760]: E1227 05:48:24.464117 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.464122 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.464217 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="933d294b-c115-4bd3-ade2-1ae37665ae1b" containerName="oauth-openshift"
Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.464234 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.464243 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a400a4d-065a-4a2a-8852-0bdb47713da4" containerName="installer"
Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.464578 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-76b96558df-wcngt"
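Before admitting the new oauth-openshift pod, the cpu_manager and memory_manager entries above sweep out per-container resource assignments left behind by pods that no longer exist. A hypothetical miniature of that sweep; the real managers track far richer state than this map:

```go
package main

import "fmt"

// key identifies an assignment the way the log lines do: by podUID and
// containerName.
type key struct{ podUID, container string }

// removeStaleState drops assignments whose pod is no longer active,
// mirroring the "RemoveStaleState: removing container" records above.
func removeStaleState(assignments map[key][]int, activePods map[string]bool) {
	for k := range assignments { // deleting during range is safe in Go
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key][]int{
		{"933d294b-c115-4bd3-ade2-1ae37665ae1b", "oauth-openshift"}: {2, 3},
		{"f85e55b1a89d02b0cb034b1ea31ed45a", "startup-monitor"}:     {1},
	}
	removeStaleState(assignments, map[string]bool{ /* neither pod is active */ })
	fmt.Println("remaining assignments:", len(assignments))
}
```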
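The long run of reconciler_common.go and operation_generator.go entries that follows traces the kubelet volume manager bringing the new pod's volumes up in three phases: VerifyControllerAttachedVolume confirms each desired volume may be mounted, MountVolume starts the operation, and MountVolume.SetUp succeeded records completion. A schematic sketch of that desired-state/actual-state loop, with illustrative types rather than kubelet's real ones:

```go
package main

import "fmt"

// volume pairs a name with the plugin that backs it, as the log lines do
// (kubernetes.io/configmap, kubernetes.io/secret, and so on).
type volume struct{ name, plugin string }

// reconcile walks the desired world and mounts anything missing from the
// actual world, emitting one line per phase like the records below.
func reconcile(desired []volume, actual map[string]bool) {
	for _, v := range desired {
		if actual[v.name] {
			continue
		}
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q (%s)\n", v.name, v.plugin)
		fmt.Printf("MountVolume started for volume %q\n", v.name)
		// ...the plugin's SetUp() would run here...
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
		actual[v.name] = true
	}
}

func main() {
	desired := []volume{
		{"v4-0-config-system-cliconfig", "kubernetes.io/configmap"},
		{"v4-0-config-system-session", "kubernetes.io/secret"},
		{"audit-dir", "kubernetes.io/host-path"},
		{"kube-api-access-qx9x4", "kubernetes.io/projected"},
	}
	reconcile(desired, map[string]bool{})
}
```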
Need to start a new one" pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.472945 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.473384 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.473651 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.472983 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.477634 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.473026 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.481422 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.485275 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.485325 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.485375 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.485458 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-user-template-error\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " 
pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.485570 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-service-ca\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.485614 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.485695 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-session\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.485741 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.485780 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-audit-policies\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.485807 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-router-certs\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.485881 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-user-template-login\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.485941 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx9x4\" (UniqueName: 
\"kubernetes.io/projected/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-kube-api-access-qx9x4\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.485972 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-audit-dir\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.473073 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.486396 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.473127 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.479870 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.480048 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.486653 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.480069 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.487432 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.489471 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.521279 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.528266 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76b96558df-wcngt"] Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.586996 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-service-ca\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.587118 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " 
pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.587209 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-session\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.587253 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-audit-policies\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.587297 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.587344 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-router-certs\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.587412 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-user-template-login\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.587456 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx9x4\" (UniqueName: \"kubernetes.io/projected/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-kube-api-access-qx9x4\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.587488 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-audit-dir\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.587521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 
05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.587568 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.587606 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.587643 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.587685 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-user-template-error\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.588513 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-audit-dir\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.589927 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-service-ca\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.590435 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.591072 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-audit-policies\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.591635 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.593647 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.593723 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.594388 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.594487 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-user-template-error\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.596219 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-router-certs\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.596997 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.606455 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-user-template-login\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.609917 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qx9x4\" (UniqueName: \"kubernetes.io/projected/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-kube-api-access-qx9x4\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.612546 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b6a36a05-81cf-405a-8b2f-f81cbf7eec5a-v4-0-config-system-session\") pod \"oauth-openshift-76b96558df-wcngt\" (UID: \"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a\") " pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.689118 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.831456 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.877384 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 27 05:48:24 crc kubenswrapper[4760]: I1227 05:48:24.910389 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 27 05:48:25 crc kubenswrapper[4760]: I1227 05:48:25.029324 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76b96558df-wcngt"] Dec 27 05:48:25 crc kubenswrapper[4760]: I1227 05:48:25.061191 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 27 05:48:25 crc kubenswrapper[4760]: I1227 05:48:25.353070 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 27 05:48:25 crc kubenswrapper[4760]: I1227 05:48:25.477217 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 27 05:48:25 crc kubenswrapper[4760]: I1227 05:48:25.655555 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 27 05:48:25 crc kubenswrapper[4760]: I1227 05:48:25.686747 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" event={"ID":"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a","Type":"ContainerStarted","Data":"445423bee74bb08951147b5fb782eb1888ebfce6d7e2443ec7e0197692f13ab8"} Dec 27 05:48:25 crc kubenswrapper[4760]: I1227 05:48:25.686853 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" event={"ID":"b6a36a05-81cf-405a-8b2f-f81cbf7eec5a","Type":"ContainerStarted","Data":"bab520ae6cb85b6dab8c155bda9440591eafc02daba8f490040ca102afc6269b"} Dec 27 05:48:25 crc kubenswrapper[4760]: I1227 05:48:25.688445 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:25 crc kubenswrapper[4760]: I1227 05:48:25.697045 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" Dec 27 05:48:25 crc 
kubenswrapper[4760]: I1227 05:48:25.751709 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-76b96558df-wcngt" podStartSLOduration=54.751686512 podStartE2EDuration="54.751686512s" podCreationTimestamp="2025-12-27 05:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:48:25.724328166 +0000 UTC m=+228.484397521" watchObservedRunningTime="2025-12-27 05:48:25.751686512 +0000 UTC m=+228.511755837" Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.329475 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.693926 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.694007 4760 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b2186504817ffaeba6019af5447b14aef0074c04fd3d0420a95c1ab4700e406e" exitCode=137 Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.779940 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.780025 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.817761 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.817829 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.817834 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.817861 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.817945 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.817990 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.818047 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.818047 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.818250 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.818617 4760 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.818659 4760 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.818680 4760 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.818698 4760 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.830747 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:48:26 crc kubenswrapper[4760]: I1227 05:48:26.920566 4760 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 27 05:48:27 crc kubenswrapper[4760]: I1227 05:48:27.515759 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 27 05:48:27 crc kubenswrapper[4760]: I1227 05:48:27.516752 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 27 05:48:27 crc kubenswrapper[4760]: I1227 05:48:27.528792 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 27 05:48:27 crc kubenswrapper[4760]: I1227 05:48:27.528833 4760 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="655accff-01f0-4f62-88e3-0ca30b02c8f4" Dec 27 05:48:27 crc kubenswrapper[4760]: I1227 05:48:27.534820 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 27 05:48:27 crc kubenswrapper[4760]: I1227 05:48:27.534996 4760 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="655accff-01f0-4f62-88e3-0ca30b02c8f4" Dec 27 05:48:27 crc kubenswrapper[4760]: I1227 05:48:27.701508 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 27 05:48:27 crc kubenswrapper[4760]: I1227 05:48:27.701681 4760 scope.go:117] "RemoveContainer" containerID="b2186504817ffaeba6019af5447b14aef0074c04fd3d0420a95c1ab4700e406e" Dec 27 05:48:27 crc kubenswrapper[4760]: 
I1227 05:48:27.701695 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 27 05:48:30 crc kubenswrapper[4760]: E1227 05:48:30.954423 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316.scope\": RecentStats: unable to find data in memory cache]" Dec 27 05:48:54 crc kubenswrapper[4760]: I1227 05:48:54.413505 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zl9wx"] Dec 27 05:48:54 crc kubenswrapper[4760]: I1227 05:48:54.414305 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" podUID="3c14515f-ee0e-4560-bed2-7ef5160b61ec" containerName="controller-manager" containerID="cri-o://34715e22af8532949887ebed877337c18deb39cdb9fcd87307a27aca6db733c6" gracePeriod=30 Dec 27 05:48:54 crc kubenswrapper[4760]: I1227 05:48:54.530553 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2"] Dec 27 05:48:54 crc kubenswrapper[4760]: I1227 05:48:54.531015 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" podUID="e0c1456f-b18f-4c71-a1f8-319ec8b012a1" containerName="route-controller-manager" containerID="cri-o://79b6398247fd45ec61b6fa81116acb22677f2c83a5ad7afc3d82674fddc14356" gracePeriod=30 Dec 27 05:48:54 crc kubenswrapper[4760]: I1227 05:48:54.855580 4760 generic.go:334] "Generic (PLEG): container finished" podID="3c14515f-ee0e-4560-bed2-7ef5160b61ec" containerID="34715e22af8532949887ebed877337c18deb39cdb9fcd87307a27aca6db733c6" exitCode=0 Dec 27 05:48:54 crc kubenswrapper[4760]: I1227 05:48:54.855639 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" event={"ID":"3c14515f-ee0e-4560-bed2-7ef5160b61ec","Type":"ContainerDied","Data":"34715e22af8532949887ebed877337c18deb39cdb9fcd87307a27aca6db733c6"} Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.339615 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.391384 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.402430 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c14515f-ee0e-4560-bed2-7ef5160b61ec-serving-cert\") pod \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.402490 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmrx2\" (UniqueName: \"kubernetes.io/projected/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-kube-api-access-gmrx2\") pod \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.402519 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-config\") pod \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.402538 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-config\") pod \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.402554 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-client-ca\") pod \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.402571 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-serving-cert\") pod \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.402589 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sksfr\" (UniqueName: \"kubernetes.io/projected/3c14515f-ee0e-4560-bed2-7ef5160b61ec-kube-api-access-sksfr\") pod \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.402611 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-proxy-ca-bundles\") pod \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\" (UID: \"3c14515f-ee0e-4560-bed2-7ef5160b61ec\") " Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.402630 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-client-ca\") pod \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\" (UID: \"e0c1456f-b18f-4c71-a1f8-319ec8b012a1\") " Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.403913 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-client-ca" (OuterVolumeSpecName: "client-ca") pod "3c14515f-ee0e-4560-bed2-7ef5160b61ec" 
(UID: "3c14515f-ee0e-4560-bed2-7ef5160b61ec"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.404903 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-config" (OuterVolumeSpecName: "config") pod "e0c1456f-b18f-4c71-a1f8-319ec8b012a1" (UID: "e0c1456f-b18f-4c71-a1f8-319ec8b012a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.405410 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3c14515f-ee0e-4560-bed2-7ef5160b61ec" (UID: "3c14515f-ee0e-4560-bed2-7ef5160b61ec"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.405439 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-config" (OuterVolumeSpecName: "config") pod "3c14515f-ee0e-4560-bed2-7ef5160b61ec" (UID: "3c14515f-ee0e-4560-bed2-7ef5160b61ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.406772 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-client-ca" (OuterVolumeSpecName: "client-ca") pod "e0c1456f-b18f-4c71-a1f8-319ec8b012a1" (UID: "e0c1456f-b18f-4c71-a1f8-319ec8b012a1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.449674 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e0c1456f-b18f-4c71-a1f8-319ec8b012a1" (UID: "e0c1456f-b18f-4c71-a1f8-319ec8b012a1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.450451 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-kube-api-access-gmrx2" (OuterVolumeSpecName: "kube-api-access-gmrx2") pod "e0c1456f-b18f-4c71-a1f8-319ec8b012a1" (UID: "e0c1456f-b18f-4c71-a1f8-319ec8b012a1"). InnerVolumeSpecName "kube-api-access-gmrx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.450859 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c14515f-ee0e-4560-bed2-7ef5160b61ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3c14515f-ee0e-4560-bed2-7ef5160b61ec" (UID: "3c14515f-ee0e-4560-bed2-7ef5160b61ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.454226 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c14515f-ee0e-4560-bed2-7ef5160b61ec-kube-api-access-sksfr" (OuterVolumeSpecName: "kube-api-access-sksfr") pod "3c14515f-ee0e-4560-bed2-7ef5160b61ec" (UID: "3c14515f-ee0e-4560-bed2-7ef5160b61ec"). 
InnerVolumeSpecName "kube-api-access-sksfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.503954 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmrx2\" (UniqueName: \"kubernetes.io/projected/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-kube-api-access-gmrx2\") on node \"crc\" DevicePath \"\"" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.503995 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.504008 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.504020 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-client-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.504033 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.504046 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sksfr\" (UniqueName: \"kubernetes.io/projected/3c14515f-ee0e-4560-bed2-7ef5160b61ec-kube-api-access-sksfr\") on node \"crc\" DevicePath \"\"" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.504057 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c14515f-ee0e-4560-bed2-7ef5160b61ec-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.504068 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0c1456f-b18f-4c71-a1f8-319ec8b012a1-client-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.504078 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c14515f-ee0e-4560-bed2-7ef5160b61ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.651898 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm"] Dec 27 05:48:55 crc kubenswrapper[4760]: E1227 05:48:55.652206 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c1456f-b18f-4c71-a1f8-319ec8b012a1" containerName="route-controller-manager" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.652218 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c1456f-b18f-4c71-a1f8-319ec8b012a1" containerName="route-controller-manager" Dec 27 05:48:55 crc kubenswrapper[4760]: E1227 05:48:55.652231 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c14515f-ee0e-4560-bed2-7ef5160b61ec" containerName="controller-manager" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.652238 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c14515f-ee0e-4560-bed2-7ef5160b61ec" containerName="controller-manager" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 
05:48:55.652352 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c1456f-b18f-4c71-a1f8-319ec8b012a1" containerName="route-controller-manager" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.652372 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c14515f-ee0e-4560-bed2-7ef5160b61ec" containerName="controller-manager" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.654440 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.660517 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm"] Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.661529 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.666217 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm"] Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.670634 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm"] Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.806523 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e68a847d-8b6b-447c-9d7a-b941f019c37e-serving-cert\") pod \"controller-manager-5659d5bb4b-d5bjm\" (UID: \"e68a847d-8b6b-447c-9d7a-b941f019c37e\") " pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.806589 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxqsv\" (UniqueName: \"kubernetes.io/projected/e68a847d-8b6b-447c-9d7a-b941f019c37e-kube-api-access-bxqsv\") pod \"controller-manager-5659d5bb4b-d5bjm\" (UID: \"e68a847d-8b6b-447c-9d7a-b941f019c37e\") " pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.806626 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d85ca9-4663-4471-8a06-30176e98e676-config\") pod \"route-controller-manager-7c7f6d8788-sghjm\" (UID: \"74d85ca9-4663-4471-8a06-30176e98e676\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.806655 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74d85ca9-4663-4471-8a06-30176e98e676-serving-cert\") pod \"route-controller-manager-7c7f6d8788-sghjm\" (UID: \"74d85ca9-4663-4471-8a06-30176e98e676\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.807369 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e68a847d-8b6b-447c-9d7a-b941f019c37e-proxy-ca-bundles\") pod \"controller-manager-5659d5bb4b-d5bjm\" (UID: \"e68a847d-8b6b-447c-9d7a-b941f019c37e\") " 
pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.807642 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e68a847d-8b6b-447c-9d7a-b941f019c37e-client-ca\") pod \"controller-manager-5659d5bb4b-d5bjm\" (UID: \"e68a847d-8b6b-447c-9d7a-b941f019c37e\") " pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.807728 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e68a847d-8b6b-447c-9d7a-b941f019c37e-config\") pod \"controller-manager-5659d5bb4b-d5bjm\" (UID: \"e68a847d-8b6b-447c-9d7a-b941f019c37e\") " pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.807767 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74d85ca9-4663-4471-8a06-30176e98e676-client-ca\") pod \"route-controller-manager-7c7f6d8788-sghjm\" (UID: \"74d85ca9-4663-4471-8a06-30176e98e676\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.807887 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd4jb\" (UniqueName: \"kubernetes.io/projected/74d85ca9-4663-4471-8a06-30176e98e676-kube-api-access-nd4jb\") pod \"route-controller-manager-7c7f6d8788-sghjm\" (UID: \"74d85ca9-4663-4471-8a06-30176e98e676\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.862471 4760 generic.go:334] "Generic (PLEG): container finished" podID="e0c1456f-b18f-4c71-a1f8-319ec8b012a1" containerID="79b6398247fd45ec61b6fa81116acb22677f2c83a5ad7afc3d82674fddc14356" exitCode=0 Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.862548 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" event={"ID":"e0c1456f-b18f-4c71-a1f8-319ec8b012a1","Type":"ContainerDied","Data":"79b6398247fd45ec61b6fa81116acb22677f2c83a5ad7afc3d82674fddc14356"} Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.862589 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.862615 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2" event={"ID":"e0c1456f-b18f-4c71-a1f8-319ec8b012a1","Type":"ContainerDied","Data":"ec667264aad70862b0a789c531bcfdec75cdc6cb6e545d11e5870a4865537522"} Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.862637 4760 scope.go:117] "RemoveContainer" containerID="79b6398247fd45ec61b6fa81116acb22677f2c83a5ad7afc3d82674fddc14356" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.864312 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" event={"ID":"3c14515f-ee0e-4560-bed2-7ef5160b61ec","Type":"ContainerDied","Data":"a0ea7b703b631f121bd8c7d1f0ee9339895906a53e0e8ac1cdfab64222191981"} Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.864449 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zl9wx" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.883751 4760 scope.go:117] "RemoveContainer" containerID="79b6398247fd45ec61b6fa81116acb22677f2c83a5ad7afc3d82674fddc14356" Dec 27 05:48:55 crc kubenswrapper[4760]: E1227 05:48:55.884259 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b6398247fd45ec61b6fa81116acb22677f2c83a5ad7afc3d82674fddc14356\": container with ID starting with 79b6398247fd45ec61b6fa81116acb22677f2c83a5ad7afc3d82674fddc14356 not found: ID does not exist" containerID="79b6398247fd45ec61b6fa81116acb22677f2c83a5ad7afc3d82674fddc14356" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.884285 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b6398247fd45ec61b6fa81116acb22677f2c83a5ad7afc3d82674fddc14356"} err="failed to get container status \"79b6398247fd45ec61b6fa81116acb22677f2c83a5ad7afc3d82674fddc14356\": rpc error: code = NotFound desc = could not find container \"79b6398247fd45ec61b6fa81116acb22677f2c83a5ad7afc3d82674fddc14356\": container with ID starting with 79b6398247fd45ec61b6fa81116acb22677f2c83a5ad7afc3d82674fddc14356 not found: ID does not exist" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.884307 4760 scope.go:117] "RemoveContainer" containerID="34715e22af8532949887ebed877337c18deb39cdb9fcd87307a27aca6db733c6" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.888599 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zl9wx"] Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.895881 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zl9wx"] Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.903161 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2"] Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.909189 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e68a847d-8b6b-447c-9d7a-b941f019c37e-client-ca\") pod \"controller-manager-5659d5bb4b-d5bjm\" (UID: \"e68a847d-8b6b-447c-9d7a-b941f019c37e\") " 
pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.909328 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e68a847d-8b6b-447c-9d7a-b941f019c37e-config\") pod \"controller-manager-5659d5bb4b-d5bjm\" (UID: \"e68a847d-8b6b-447c-9d7a-b941f019c37e\") " pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.909401 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74d85ca9-4663-4471-8a06-30176e98e676-client-ca\") pod \"route-controller-manager-7c7f6d8788-sghjm\" (UID: \"74d85ca9-4663-4471-8a06-30176e98e676\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.909496 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd4jb\" (UniqueName: \"kubernetes.io/projected/74d85ca9-4663-4471-8a06-30176e98e676-kube-api-access-nd4jb\") pod \"route-controller-manager-7c7f6d8788-sghjm\" (UID: \"74d85ca9-4663-4471-8a06-30176e98e676\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.909572 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e68a847d-8b6b-447c-9d7a-b941f019c37e-serving-cert\") pod \"controller-manager-5659d5bb4b-d5bjm\" (UID: \"e68a847d-8b6b-447c-9d7a-b941f019c37e\") " pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.909641 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxqsv\" (UniqueName: \"kubernetes.io/projected/e68a847d-8b6b-447c-9d7a-b941f019c37e-kube-api-access-bxqsv\") pod \"controller-manager-5659d5bb4b-d5bjm\" (UID: \"e68a847d-8b6b-447c-9d7a-b941f019c37e\") " pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.909712 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d85ca9-4663-4471-8a06-30176e98e676-config\") pod \"route-controller-manager-7c7f6d8788-sghjm\" (UID: \"74d85ca9-4663-4471-8a06-30176e98e676\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.909764 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74d85ca9-4663-4471-8a06-30176e98e676-serving-cert\") pod \"route-controller-manager-7c7f6d8788-sghjm\" (UID: \"74d85ca9-4663-4471-8a06-30176e98e676\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.909824 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e68a847d-8b6b-447c-9d7a-b941f019c37e-proxy-ca-bundles\") pod \"controller-manager-5659d5bb4b-d5bjm\" (UID: \"e68a847d-8b6b-447c-9d7a-b941f019c37e\") " pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 
05:48:55.910317 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e68a847d-8b6b-447c-9d7a-b941f019c37e-client-ca\") pod \"controller-manager-5659d5bb4b-d5bjm\" (UID: \"e68a847d-8b6b-447c-9d7a-b941f019c37e\") " pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.912015 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e68a847d-8b6b-447c-9d7a-b941f019c37e-proxy-ca-bundles\") pod \"controller-manager-5659d5bb4b-d5bjm\" (UID: \"e68a847d-8b6b-447c-9d7a-b941f019c37e\") " pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.912540 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74d85ca9-4663-4471-8a06-30176e98e676-client-ca\") pod \"route-controller-manager-7c7f6d8788-sghjm\" (UID: \"74d85ca9-4663-4471-8a06-30176e98e676\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.913992 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpcr2"] Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.914659 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e68a847d-8b6b-447c-9d7a-b941f019c37e-serving-cert\") pod \"controller-manager-5659d5bb4b-d5bjm\" (UID: \"e68a847d-8b6b-447c-9d7a-b941f019c37e\") " pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.916295 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e68a847d-8b6b-447c-9d7a-b941f019c37e-config\") pod \"controller-manager-5659d5bb4b-d5bjm\" (UID: \"e68a847d-8b6b-447c-9d7a-b941f019c37e\") " pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.916541 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74d85ca9-4663-4471-8a06-30176e98e676-serving-cert\") pod \"route-controller-manager-7c7f6d8788-sghjm\" (UID: \"74d85ca9-4663-4471-8a06-30176e98e676\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.917170 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d85ca9-4663-4471-8a06-30176e98e676-config\") pod \"route-controller-manager-7c7f6d8788-sghjm\" (UID: \"74d85ca9-4663-4471-8a06-30176e98e676\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.933150 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd4jb\" (UniqueName: \"kubernetes.io/projected/74d85ca9-4663-4471-8a06-30176e98e676-kube-api-access-nd4jb\") pod \"route-controller-manager-7c7f6d8788-sghjm\" (UID: \"74d85ca9-4663-4471-8a06-30176e98e676\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.936177 
4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxqsv\" (UniqueName: \"kubernetes.io/projected/e68a847d-8b6b-447c-9d7a-b941f019c37e-kube-api-access-bxqsv\") pod \"controller-manager-5659d5bb4b-d5bjm\" (UID: \"e68a847d-8b6b-447c-9d7a-b941f019c37e\") " pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:55 crc kubenswrapper[4760]: I1227 05:48:55.984835 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:48:56 crc kubenswrapper[4760]: I1227 05:48:56.002224 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:56 crc kubenswrapper[4760]: I1227 05:48:56.185559 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm"] Dec 27 05:48:56 crc kubenswrapper[4760]: I1227 05:48:56.224544 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm"] Dec 27 05:48:56 crc kubenswrapper[4760]: W1227 05:48:56.227848 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode68a847d_8b6b_447c_9d7a_b941f019c37e.slice/crio-bd5ad3beeeede758f1b12067f32fd027733d5a29e87f37bd27b3d5dc9a11c5e8 WatchSource:0}: Error finding container bd5ad3beeeede758f1b12067f32fd027733d5a29e87f37bd27b3d5dc9a11c5e8: Status 404 returned error can't find the container with id bd5ad3beeeede758f1b12067f32fd027733d5a29e87f37bd27b3d5dc9a11c5e8 Dec 27 05:48:56 crc kubenswrapper[4760]: I1227 05:48:56.871181 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" event={"ID":"74d85ca9-4663-4471-8a06-30176e98e676","Type":"ContainerStarted","Data":"682f69202616dd07070176f206bd8ab2d1a634535c04286185dd899c07aec751"} Dec 27 05:48:56 crc kubenswrapper[4760]: I1227 05:48:56.871239 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" event={"ID":"74d85ca9-4663-4471-8a06-30176e98e676","Type":"ContainerStarted","Data":"d250536d5941a99cec8b5571df6fd435b2fecbc2c304b79a7488f3fdb5e4332a"} Dec 27 05:48:56 crc kubenswrapper[4760]: I1227 05:48:56.871580 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:48:56 crc kubenswrapper[4760]: I1227 05:48:56.873675 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" event={"ID":"e68a847d-8b6b-447c-9d7a-b941f019c37e","Type":"ContainerStarted","Data":"b5f0a522eb4fd637cb356e640e9fa992b77134e2987ba86f600060bd035822d7"} Dec 27 05:48:56 crc kubenswrapper[4760]: I1227 05:48:56.873737 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" event={"ID":"e68a847d-8b6b-447c-9d7a-b941f019c37e","Type":"ContainerStarted","Data":"bd5ad3beeeede758f1b12067f32fd027733d5a29e87f37bd27b3d5dc9a11c5e8"} Dec 27 05:48:56 crc kubenswrapper[4760]: I1227 05:48:56.873880 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:56 crc 
kubenswrapper[4760]: I1227 05:48:56.877351 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:48:56 crc kubenswrapper[4760]: I1227 05:48:56.878423 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" Dec 27 05:48:56 crc kubenswrapper[4760]: I1227 05:48:56.887631 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" podStartSLOduration=2.8876134650000003 podStartE2EDuration="2.887613465s" podCreationTimestamp="2025-12-27 05:48:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:48:56.887029312 +0000 UTC m=+259.647098647" watchObservedRunningTime="2025-12-27 05:48:56.887613465 +0000 UTC m=+259.647682780" Dec 27 05:48:56 crc kubenswrapper[4760]: I1227 05:48:56.929950 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5659d5bb4b-d5bjm" podStartSLOduration=2.9299255349999997 podStartE2EDuration="2.929925535s" podCreationTimestamp="2025-12-27 05:48:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:48:56.904520785 +0000 UTC m=+259.664590100" watchObservedRunningTime="2025-12-27 05:48:56.929925535 +0000 UTC m=+259.689994860" Dec 27 05:48:57 crc kubenswrapper[4760]: I1227 05:48:57.511010 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c14515f-ee0e-4560-bed2-7ef5160b61ec" path="/var/lib/kubelet/pods/3c14515f-ee0e-4560-bed2-7ef5160b61ec/volumes" Dec 27 05:48:57 crc kubenswrapper[4760]: I1227 05:48:57.512163 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0c1456f-b18f-4c71-a1f8-319ec8b012a1" path="/var/lib/kubelet/pods/e0c1456f-b18f-4c71-a1f8-319ec8b012a1/volumes" Dec 27 05:49:14 crc kubenswrapper[4760]: I1227 05:49:14.866780 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xz5fn"] Dec 27 05:49:14 crc kubenswrapper[4760]: I1227 05:49:14.867609 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xz5fn" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" containerName="registry-server" containerID="cri-o://956517f983e1582927a6a1bb77f79377f347c3e3e9124e7b62db94d8798278c4" gracePeriod=2 Dec 27 05:49:15 crc kubenswrapper[4760]: I1227 05:49:14.997342 4760 generic.go:334] "Generic (PLEG): container finished" podID="dad6d9ba-2049-4d43-a786-9ce87644643f" containerID="956517f983e1582927a6a1bb77f79377f347c3e3e9124e7b62db94d8798278c4" exitCode=0 Dec 27 05:49:15 crc kubenswrapper[4760]: I1227 05:49:14.997377 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xz5fn" event={"ID":"dad6d9ba-2049-4d43-a786-9ce87644643f","Type":"ContainerDied","Data":"956517f983e1582927a6a1bb77f79377f347c3e3e9124e7b62db94d8798278c4"} Dec 27 05:49:15 crc kubenswrapper[4760]: I1227 05:49:15.460709 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:49:15 crc kubenswrapper[4760]: I1227 05:49:15.572880 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad6d9ba-2049-4d43-a786-9ce87644643f-utilities\") pod \"dad6d9ba-2049-4d43-a786-9ce87644643f\" (UID: \"dad6d9ba-2049-4d43-a786-9ce87644643f\") " Dec 27 05:49:15 crc kubenswrapper[4760]: I1227 05:49:15.573024 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad6d9ba-2049-4d43-a786-9ce87644643f-catalog-content\") pod \"dad6d9ba-2049-4d43-a786-9ce87644643f\" (UID: \"dad6d9ba-2049-4d43-a786-9ce87644643f\") " Dec 27 05:49:15 crc kubenswrapper[4760]: I1227 05:49:15.573060 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlk8v\" (UniqueName: \"kubernetes.io/projected/dad6d9ba-2049-4d43-a786-9ce87644643f-kube-api-access-nlk8v\") pod \"dad6d9ba-2049-4d43-a786-9ce87644643f\" (UID: \"dad6d9ba-2049-4d43-a786-9ce87644643f\") " Dec 27 05:49:15 crc kubenswrapper[4760]: I1227 05:49:15.574196 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad6d9ba-2049-4d43-a786-9ce87644643f-utilities" (OuterVolumeSpecName: "utilities") pod "dad6d9ba-2049-4d43-a786-9ce87644643f" (UID: "dad6d9ba-2049-4d43-a786-9ce87644643f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:49:15 crc kubenswrapper[4760]: I1227 05:49:15.586935 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad6d9ba-2049-4d43-a786-9ce87644643f-kube-api-access-nlk8v" (OuterVolumeSpecName: "kube-api-access-nlk8v") pod "dad6d9ba-2049-4d43-a786-9ce87644643f" (UID: "dad6d9ba-2049-4d43-a786-9ce87644643f"). InnerVolumeSpecName "kube-api-access-nlk8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:49:15 crc kubenswrapper[4760]: I1227 05:49:15.674444 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad6d9ba-2049-4d43-a786-9ce87644643f-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 05:49:15 crc kubenswrapper[4760]: I1227 05:49:15.674720 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlk8v\" (UniqueName: \"kubernetes.io/projected/dad6d9ba-2049-4d43-a786-9ce87644643f-kube-api-access-nlk8v\") on node \"crc\" DevicePath \"\"" Dec 27 05:49:15 crc kubenswrapper[4760]: I1227 05:49:15.698893 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad6d9ba-2049-4d43-a786-9ce87644643f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dad6d9ba-2049-4d43-a786-9ce87644643f" (UID: "dad6d9ba-2049-4d43-a786-9ce87644643f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:49:15 crc kubenswrapper[4760]: I1227 05:49:15.775763 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad6d9ba-2049-4d43-a786-9ce87644643f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 05:49:16 crc kubenswrapper[4760]: I1227 05:49:16.004644 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xz5fn" event={"ID":"dad6d9ba-2049-4d43-a786-9ce87644643f","Type":"ContainerDied","Data":"8a758f6b925b34344593fe40e5afae6b2a90ac28321ed22a7c8a373d4ef5f007"} Dec 27 05:49:16 crc kubenswrapper[4760]: I1227 05:49:16.004692 4760 scope.go:117] "RemoveContainer" containerID="956517f983e1582927a6a1bb77f79377f347c3e3e9124e7b62db94d8798278c4" Dec 27 05:49:16 crc kubenswrapper[4760]: I1227 05:49:16.004799 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xz5fn" Dec 27 05:49:16 crc kubenswrapper[4760]: I1227 05:49:16.027140 4760 scope.go:117] "RemoveContainer" containerID="ae577f4199318e7f29f75a78f7a1872f635cf644da91649680d8a5fece1da4de" Dec 27 05:49:16 crc kubenswrapper[4760]: I1227 05:49:16.038206 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xz5fn"] Dec 27 05:49:16 crc kubenswrapper[4760]: I1227 05:49:16.041361 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xz5fn"] Dec 27 05:49:16 crc kubenswrapper[4760]: I1227 05:49:16.063336 4760 scope.go:117] "RemoveContainer" containerID="51c78926e7a0bb86ddb65865bdc10b112414d3468e8223de99fe18d93594072e" Dec 27 05:49:17 crc kubenswrapper[4760]: I1227 05:49:17.514031 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" path="/var/lib/kubelet/pods/dad6d9ba-2049-4d43-a786-9ce87644643f/volumes" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.586460 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-l7r7v"] Dec 27 05:49:33 crc kubenswrapper[4760]: E1227 05:49:33.587288 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" containerName="extract-utilities" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.587305 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" containerName="extract-utilities" Dec 27 05:49:33 crc kubenswrapper[4760]: E1227 05:49:33.587319 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" containerName="registry-server" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.587328 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" containerName="registry-server" Dec 27 05:49:33 crc kubenswrapper[4760]: E1227 05:49:33.587340 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" containerName="extract-content" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.587349 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" containerName="extract-content" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.587468 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad6d9ba-2049-4d43-a786-9ce87644643f" containerName="registry-server" Dec 27 
05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.587911 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.610989 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-l7r7v"] Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.727325 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b718e411-3aa2-43ec-b95f-b2542cf945a9-trusted-ca\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.727391 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.727423 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b718e411-3aa2-43ec-b95f-b2542cf945a9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.727502 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzssx\" (UniqueName: \"kubernetes.io/projected/b718e411-3aa2-43ec-b95f-b2542cf945a9-kube-api-access-rzssx\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.727531 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b718e411-3aa2-43ec-b95f-b2542cf945a9-registry-certificates\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.727733 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b718e411-3aa2-43ec-b95f-b2542cf945a9-bound-sa-token\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.727758 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b718e411-3aa2-43ec-b95f-b2542cf945a9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.727791 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b718e411-3aa2-43ec-b95f-b2542cf945a9-registry-tls\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.758932 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.829298 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzssx\" (UniqueName: \"kubernetes.io/projected/b718e411-3aa2-43ec-b95f-b2542cf945a9-kube-api-access-rzssx\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.829341 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b718e411-3aa2-43ec-b95f-b2542cf945a9-registry-certificates\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.829383 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b718e411-3aa2-43ec-b95f-b2542cf945a9-bound-sa-token\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.829402 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b718e411-3aa2-43ec-b95f-b2542cf945a9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.829443 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b718e411-3aa2-43ec-b95f-b2542cf945a9-registry-tls\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.829465 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b718e411-3aa2-43ec-b95f-b2542cf945a9-trusted-ca\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.829485 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b718e411-3aa2-43ec-b95f-b2542cf945a9-installation-pull-secrets\") pod 
\"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.830931 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b718e411-3aa2-43ec-b95f-b2542cf945a9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.831697 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b718e411-3aa2-43ec-b95f-b2542cf945a9-registry-certificates\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.831706 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b718e411-3aa2-43ec-b95f-b2542cf945a9-trusted-ca\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.837947 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b718e411-3aa2-43ec-b95f-b2542cf945a9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.845039 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b718e411-3aa2-43ec-b95f-b2542cf945a9-registry-tls\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.853172 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b718e411-3aa2-43ec-b95f-b2542cf945a9-bound-sa-token\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.856009 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzssx\" (UniqueName: \"kubernetes.io/projected/b718e411-3aa2-43ec-b95f-b2542cf945a9-kube-api-access-rzssx\") pod \"image-registry-66df7c8f76-l7r7v\" (UID: \"b718e411-3aa2-43ec-b95f-b2542cf945a9\") " pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:33 crc kubenswrapper[4760]: I1227 05:49:33.911773 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:34 crc kubenswrapper[4760]: I1227 05:49:34.307717 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-l7r7v"] Dec 27 05:49:34 crc kubenswrapper[4760]: I1227 05:49:34.402755 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm"] Dec 27 05:49:34 crc kubenswrapper[4760]: I1227 05:49:34.402997 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" podUID="74d85ca9-4663-4471-8a06-30176e98e676" containerName="route-controller-manager" containerID="cri-o://682f69202616dd07070176f206bd8ab2d1a634535c04286185dd899c07aec751" gracePeriod=30 Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.114259 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" event={"ID":"b718e411-3aa2-43ec-b95f-b2542cf945a9","Type":"ContainerStarted","Data":"89002aa41afda406899feb40beae9fc73ef6bec1114450a939569c51a0f7c74a"} Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.114618 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" event={"ID":"b718e411-3aa2-43ec-b95f-b2542cf945a9","Type":"ContainerStarted","Data":"c1f311f7ec45f12d270b37512374f6e2b098cae59b2922327e6b1cfa1ba8d3d9"} Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.114640 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.116513 4760 generic.go:334] "Generic (PLEG): container finished" podID="74d85ca9-4663-4471-8a06-30176e98e676" containerID="682f69202616dd07070176f206bd8ab2d1a634535c04286185dd899c07aec751" exitCode=0 Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.116585 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" event={"ID":"74d85ca9-4663-4471-8a06-30176e98e676","Type":"ContainerDied","Data":"682f69202616dd07070176f206bd8ab2d1a634535c04286185dd899c07aec751"} Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.137076 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" podStartSLOduration=2.137055364 podStartE2EDuration="2.137055364s" podCreationTimestamp="2025-12-27 05:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:49:35.134578774 +0000 UTC m=+297.894648089" watchObservedRunningTime="2025-12-27 05:49:35.137055364 +0000 UTC m=+297.897124679" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.541810 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.584464 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8"] Dec 27 05:49:35 crc kubenswrapper[4760]: E1227 05:49:35.584863 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d85ca9-4663-4471-8a06-30176e98e676" containerName="route-controller-manager" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.584900 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d85ca9-4663-4471-8a06-30176e98e676" containerName="route-controller-manager" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.585248 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d85ca9-4663-4471-8a06-30176e98e676" containerName="route-controller-manager" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.586014 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.595837 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8"] Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.651032 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74d85ca9-4663-4471-8a06-30176e98e676-serving-cert\") pod \"74d85ca9-4663-4471-8a06-30176e98e676\" (UID: \"74d85ca9-4663-4471-8a06-30176e98e676\") " Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.651151 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd4jb\" (UniqueName: \"kubernetes.io/projected/74d85ca9-4663-4471-8a06-30176e98e676-kube-api-access-nd4jb\") pod \"74d85ca9-4663-4471-8a06-30176e98e676\" (UID: \"74d85ca9-4663-4471-8a06-30176e98e676\") " Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.651223 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74d85ca9-4663-4471-8a06-30176e98e676-client-ca\") pod \"74d85ca9-4663-4471-8a06-30176e98e676\" (UID: \"74d85ca9-4663-4471-8a06-30176e98e676\") " Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.651273 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d85ca9-4663-4471-8a06-30176e98e676-config\") pod \"74d85ca9-4663-4471-8a06-30176e98e676\" (UID: \"74d85ca9-4663-4471-8a06-30176e98e676\") " Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.652152 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74d85ca9-4663-4471-8a06-30176e98e676-client-ca" (OuterVolumeSpecName: "client-ca") pod "74d85ca9-4663-4471-8a06-30176e98e676" (UID: "74d85ca9-4663-4471-8a06-30176e98e676"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.652666 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74d85ca9-4663-4471-8a06-30176e98e676-config" (OuterVolumeSpecName: "config") pod "74d85ca9-4663-4471-8a06-30176e98e676" (UID: "74d85ca9-4663-4471-8a06-30176e98e676"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.656764 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d85ca9-4663-4471-8a06-30176e98e676-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "74d85ca9-4663-4471-8a06-30176e98e676" (UID: "74d85ca9-4663-4471-8a06-30176e98e676"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.661074 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d85ca9-4663-4471-8a06-30176e98e676-kube-api-access-nd4jb" (OuterVolumeSpecName: "kube-api-access-nd4jb") pod "74d85ca9-4663-4471-8a06-30176e98e676" (UID: "74d85ca9-4663-4471-8a06-30176e98e676"). InnerVolumeSpecName "kube-api-access-nd4jb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.752977 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvvqs\" (UniqueName: \"kubernetes.io/projected/65be11ec-44e2-4506-90ea-eac6aee86f89-kube-api-access-gvvqs\") pod \"route-controller-manager-67c8c8d88f-z9jx8\" (UID: \"65be11ec-44e2-4506-90ea-eac6aee86f89\") " pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.753045 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65be11ec-44e2-4506-90ea-eac6aee86f89-config\") pod \"route-controller-manager-67c8c8d88f-z9jx8\" (UID: \"65be11ec-44e2-4506-90ea-eac6aee86f89\") " pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.753099 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65be11ec-44e2-4506-90ea-eac6aee86f89-client-ca\") pod \"route-controller-manager-67c8c8d88f-z9jx8\" (UID: \"65be11ec-44e2-4506-90ea-eac6aee86f89\") " pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.753148 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65be11ec-44e2-4506-90ea-eac6aee86f89-serving-cert\") pod \"route-controller-manager-67c8c8d88f-z9jx8\" (UID: \"65be11ec-44e2-4506-90ea-eac6aee86f89\") " pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.753192 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74d85ca9-4663-4471-8a06-30176e98e676-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.753203 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74d85ca9-4663-4471-8a06-30176e98e676-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.753214 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd4jb\" (UniqueName: \"kubernetes.io/projected/74d85ca9-4663-4471-8a06-30176e98e676-kube-api-access-nd4jb\") on node \"crc\" DevicePath 
\"\"" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.753227 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74d85ca9-4663-4471-8a06-30176e98e676-client-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.854747 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65be11ec-44e2-4506-90ea-eac6aee86f89-serving-cert\") pod \"route-controller-manager-67c8c8d88f-z9jx8\" (UID: \"65be11ec-44e2-4506-90ea-eac6aee86f89\") " pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.854820 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvvqs\" (UniqueName: \"kubernetes.io/projected/65be11ec-44e2-4506-90ea-eac6aee86f89-kube-api-access-gvvqs\") pod \"route-controller-manager-67c8c8d88f-z9jx8\" (UID: \"65be11ec-44e2-4506-90ea-eac6aee86f89\") " pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.854879 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65be11ec-44e2-4506-90ea-eac6aee86f89-config\") pod \"route-controller-manager-67c8c8d88f-z9jx8\" (UID: \"65be11ec-44e2-4506-90ea-eac6aee86f89\") " pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.854930 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65be11ec-44e2-4506-90ea-eac6aee86f89-client-ca\") pod \"route-controller-manager-67c8c8d88f-z9jx8\" (UID: \"65be11ec-44e2-4506-90ea-eac6aee86f89\") " pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.855950 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65be11ec-44e2-4506-90ea-eac6aee86f89-client-ca\") pod \"route-controller-manager-67c8c8d88f-z9jx8\" (UID: \"65be11ec-44e2-4506-90ea-eac6aee86f89\") " pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.856182 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65be11ec-44e2-4506-90ea-eac6aee86f89-config\") pod \"route-controller-manager-67c8c8d88f-z9jx8\" (UID: \"65be11ec-44e2-4506-90ea-eac6aee86f89\") " pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.858803 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65be11ec-44e2-4506-90ea-eac6aee86f89-serving-cert\") pod \"route-controller-manager-67c8c8d88f-z9jx8\" (UID: \"65be11ec-44e2-4506-90ea-eac6aee86f89\") " pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.872172 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvvqs\" (UniqueName: \"kubernetes.io/projected/65be11ec-44e2-4506-90ea-eac6aee86f89-kube-api-access-gvvqs\") pod 
\"route-controller-manager-67c8c8d88f-z9jx8\" (UID: \"65be11ec-44e2-4506-90ea-eac6aee86f89\") " pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" Dec 27 05:49:35 crc kubenswrapper[4760]: I1227 05:49:35.907728 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" Dec 27 05:49:36 crc kubenswrapper[4760]: I1227 05:49:36.125984 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" Dec 27 05:49:36 crc kubenswrapper[4760]: I1227 05:49:36.125975 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm" event={"ID":"74d85ca9-4663-4471-8a06-30176e98e676","Type":"ContainerDied","Data":"d250536d5941a99cec8b5571df6fd435b2fecbc2c304b79a7488f3fdb5e4332a"} Dec 27 05:49:36 crc kubenswrapper[4760]: I1227 05:49:36.126047 4760 scope.go:117] "RemoveContainer" containerID="682f69202616dd07070176f206bd8ab2d1a634535c04286185dd899c07aec751" Dec 27 05:49:36 crc kubenswrapper[4760]: I1227 05:49:36.152915 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm"] Dec 27 05:49:36 crc kubenswrapper[4760]: I1227 05:49:36.156121 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-sghjm"] Dec 27 05:49:36 crc kubenswrapper[4760]: I1227 05:49:36.283246 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8"] Dec 27 05:49:36 crc kubenswrapper[4760]: W1227 05:49:36.290567 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65be11ec_44e2_4506_90ea_eac6aee86f89.slice/crio-61de9c82a17736385b8d8bfdb370e719710aef7a93234867d22eb61e5ac053aa WatchSource:0}: Error finding container 61de9c82a17736385b8d8bfdb370e719710aef7a93234867d22eb61e5ac053aa: Status 404 returned error can't find the container with id 61de9c82a17736385b8d8bfdb370e719710aef7a93234867d22eb61e5ac053aa Dec 27 05:49:37 crc kubenswrapper[4760]: I1227 05:49:37.132548 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" event={"ID":"65be11ec-44e2-4506-90ea-eac6aee86f89","Type":"ContainerStarted","Data":"d222034f2f3cd0083899f6c98f3308546c9ce80c972f7d3a83676f8557f9e25e"} Dec 27 05:49:37 crc kubenswrapper[4760]: I1227 05:49:37.132906 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" event={"ID":"65be11ec-44e2-4506-90ea-eac6aee86f89","Type":"ContainerStarted","Data":"61de9c82a17736385b8d8bfdb370e719710aef7a93234867d22eb61e5ac053aa"} Dec 27 05:49:37 crc kubenswrapper[4760]: I1227 05:49:37.132928 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" Dec 27 05:49:37 crc kubenswrapper[4760]: I1227 05:49:37.277511 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" Dec 27 05:49:37 crc kubenswrapper[4760]: I1227 05:49:37.298917 4760 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67c8c8d88f-z9jx8" podStartSLOduration=3.298895231 podStartE2EDuration="3.298895231s" podCreationTimestamp="2025-12-27 05:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:49:37.152307063 +0000 UTC m=+299.912376378" watchObservedRunningTime="2025-12-27 05:49:37.298895231 +0000 UTC m=+300.058964566" Dec 27 05:49:37 crc kubenswrapper[4760]: I1227 05:49:37.368406 4760 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 27 05:49:37 crc kubenswrapper[4760]: I1227 05:49:37.510217 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74d85ca9-4663-4471-8a06-30176e98e676" path="/var/lib/kubelet/pods/74d85ca9-4663-4471-8a06-30176e98e676/volumes" Dec 27 05:49:53 crc kubenswrapper[4760]: I1227 05:49:53.925564 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-l7r7v" Dec 27 05:49:53 crc kubenswrapper[4760]: I1227 05:49:53.992283 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzs8s"] Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.358354 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lszcc"] Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.361259 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lszcc" podUID="4b5d003c-9d11-417b-aafd-19fde5a27981" containerName="registry-server" containerID="cri-o://d68865fab42c77f298c74b142535642bb568f10fca2b5616d3bd1e46f5b19cfa" gracePeriod=30 Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.366086 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xqm68"] Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.366552 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xqm68" podUID="277187d7-c71a-4583-8d65-2e713e20557d" containerName="registry-server" containerID="cri-o://b6a9de9f71dc3d0e339e3859de933f1498f6410e6a0c4a447f83667bb6684089" gracePeriod=30 Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.380011 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w927h"] Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.380280 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-w927h" podUID="2f024673-515b-451c-b19c-f542b4cebba9" containerName="marketplace-operator" containerID="cri-o://b05c9c6bbaa5fe79c729237a0fdc191f6dadc17d946292a69082a9914698c6ea" gracePeriod=30 Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.387374 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mr26d"] Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.387672 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mr26d" podUID="89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" containerName="registry-server" containerID="cri-o://cc8bbfea7df506fd1696314300f522eb2d946dca056c3921fa60e7d200324f60" gracePeriod=30 Dec 27 05:50:08 crc 
kubenswrapper[4760]: I1227 05:50:08.411492 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zhrlh"] Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.412774 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zhrlh" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.417772 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7msw2"] Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.418311 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7msw2" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" containerName="registry-server" containerID="cri-o://2ed2f0e11f704c3684e25cfe5d2472c52463bd43aa53479e8273d055b1031e02" gracePeriod=30 Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.424217 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zhrlh"] Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.509144 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrggk\" (UniqueName: \"kubernetes.io/projected/a8049c10-25cf-46ee-b24a-42b5e5af0d6a-kube-api-access-rrggk\") pod \"marketplace-operator-79b997595-zhrlh\" (UID: \"a8049c10-25cf-46ee-b24a-42b5e5af0d6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zhrlh" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.509215 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a8049c10-25cf-46ee-b24a-42b5e5af0d6a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zhrlh\" (UID: \"a8049c10-25cf-46ee-b24a-42b5e5af0d6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zhrlh" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.509255 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8049c10-25cf-46ee-b24a-42b5e5af0d6a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zhrlh\" (UID: \"a8049c10-25cf-46ee-b24a-42b5e5af0d6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zhrlh" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.610791 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a8049c10-25cf-46ee-b24a-42b5e5af0d6a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zhrlh\" (UID: \"a8049c10-25cf-46ee-b24a-42b5e5af0d6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zhrlh" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.610877 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8049c10-25cf-46ee-b24a-42b5e5af0d6a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zhrlh\" (UID: \"a8049c10-25cf-46ee-b24a-42b5e5af0d6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zhrlh" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.610903 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrggk\" (UniqueName: 
\"kubernetes.io/projected/a8049c10-25cf-46ee-b24a-42b5e5af0d6a-kube-api-access-rrggk\") pod \"marketplace-operator-79b997595-zhrlh\" (UID: \"a8049c10-25cf-46ee-b24a-42b5e5af0d6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zhrlh" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.615772 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8049c10-25cf-46ee-b24a-42b5e5af0d6a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zhrlh\" (UID: \"a8049c10-25cf-46ee-b24a-42b5e5af0d6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zhrlh" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.621015 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a8049c10-25cf-46ee-b24a-42b5e5af0d6a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zhrlh\" (UID: \"a8049c10-25cf-46ee-b24a-42b5e5af0d6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zhrlh" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.629045 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrggk\" (UniqueName: \"kubernetes.io/projected/a8049c10-25cf-46ee-b24a-42b5e5af0d6a-kube-api-access-rrggk\") pod \"marketplace-operator-79b997595-zhrlh\" (UID: \"a8049c10-25cf-46ee-b24a-42b5e5af0d6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-zhrlh" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.781817 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zhrlh" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.787403 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.830459 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.870043 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.870986 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.871005 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w927h" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.914176 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/277187d7-c71a-4583-8d65-2e713e20557d-catalog-content\") pod \"277187d7-c71a-4583-8d65-2e713e20557d\" (UID: \"277187d7-c71a-4583-8d65-2e713e20557d\") " Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.914243 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgvfs\" (UniqueName: \"kubernetes.io/projected/277187d7-c71a-4583-8d65-2e713e20557d-kube-api-access-hgvfs\") pod \"277187d7-c71a-4583-8d65-2e713e20557d\" (UID: \"277187d7-c71a-4583-8d65-2e713e20557d\") " Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.914261 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qmq6\" (UniqueName: \"kubernetes.io/projected/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-kube-api-access-8qmq6\") pod \"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1\" (UID: \"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1\") " Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.914325 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-utilities\") pod \"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1\" (UID: \"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1\") " Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.914347 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/277187d7-c71a-4583-8d65-2e713e20557d-utilities\") pod \"277187d7-c71a-4583-8d65-2e713e20557d\" (UID: \"277187d7-c71a-4583-8d65-2e713e20557d\") " Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.914385 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-catalog-content\") pod \"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1\" (UID: \"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1\") " Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.917769 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/277187d7-c71a-4583-8d65-2e713e20557d-utilities" (OuterVolumeSpecName: "utilities") pod "277187d7-c71a-4583-8d65-2e713e20557d" (UID: "277187d7-c71a-4583-8d65-2e713e20557d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.918803 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-kube-api-access-8qmq6" (OuterVolumeSpecName: "kube-api-access-8qmq6") pod "89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" (UID: "89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1"). InnerVolumeSpecName "kube-api-access-8qmq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.918876 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-utilities" (OuterVolumeSpecName: "utilities") pod "89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" (UID: "89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.920400 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277187d7-c71a-4583-8d65-2e713e20557d-kube-api-access-hgvfs" (OuterVolumeSpecName: "kube-api-access-hgvfs") pod "277187d7-c71a-4583-8d65-2e713e20557d" (UID: "277187d7-c71a-4583-8d65-2e713e20557d"). InnerVolumeSpecName "kube-api-access-hgvfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.951790 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" (UID: "89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:50:08 crc kubenswrapper[4760]: I1227 05:50:08.976701 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/277187d7-c71a-4583-8d65-2e713e20557d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "277187d7-c71a-4583-8d65-2e713e20557d" (UID: "277187d7-c71a-4583-8d65-2e713e20557d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.015599 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55c01569-3cd2-4c5f-9039-b61176dac0f3-catalog-content\") pod \"55c01569-3cd2-4c5f-9039-b61176dac0f3\" (UID: \"55c01569-3cd2-4c5f-9039-b61176dac0f3\") " Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.015638 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b5d003c-9d11-417b-aafd-19fde5a27981-catalog-content\") pod \"4b5d003c-9d11-417b-aafd-19fde5a27981\" (UID: \"4b5d003c-9d11-417b-aafd-19fde5a27981\") " Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.015670 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f024673-515b-451c-b19c-f542b4cebba9-marketplace-trusted-ca\") pod \"2f024673-515b-451c-b19c-f542b4cebba9\" (UID: \"2f024673-515b-451c-b19c-f542b4cebba9\") " Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.015712 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b85hv\" (UniqueName: \"kubernetes.io/projected/55c01569-3cd2-4c5f-9039-b61176dac0f3-kube-api-access-b85hv\") pod \"55c01569-3cd2-4c5f-9039-b61176dac0f3\" (UID: \"55c01569-3cd2-4c5f-9039-b61176dac0f3\") " Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.015738 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hktl7\" (UniqueName: \"kubernetes.io/projected/4b5d003c-9d11-417b-aafd-19fde5a27981-kube-api-access-hktl7\") pod \"4b5d003c-9d11-417b-aafd-19fde5a27981\" (UID: \"4b5d003c-9d11-417b-aafd-19fde5a27981\") " Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.015766 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55c01569-3cd2-4c5f-9039-b61176dac0f3-utilities\") pod \"55c01569-3cd2-4c5f-9039-b61176dac0f3\" (UID: 
\"55c01569-3cd2-4c5f-9039-b61176dac0f3\") " Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.015785 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b5d003c-9d11-417b-aafd-19fde5a27981-utilities\") pod \"4b5d003c-9d11-417b-aafd-19fde5a27981\" (UID: \"4b5d003c-9d11-417b-aafd-19fde5a27981\") " Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.015807 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp55s\" (UniqueName: \"kubernetes.io/projected/2f024673-515b-451c-b19c-f542b4cebba9-kube-api-access-hp55s\") pod \"2f024673-515b-451c-b19c-f542b4cebba9\" (UID: \"2f024673-515b-451c-b19c-f542b4cebba9\") " Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.015836 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2f024673-515b-451c-b19c-f542b4cebba9-marketplace-operator-metrics\") pod \"2f024673-515b-451c-b19c-f542b4cebba9\" (UID: \"2f024673-515b-451c-b19c-f542b4cebba9\") " Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.016072 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/277187d7-c71a-4583-8d65-2e713e20557d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.016082 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgvfs\" (UniqueName: \"kubernetes.io/projected/277187d7-c71a-4583-8d65-2e713e20557d-kube-api-access-hgvfs\") on node \"crc\" DevicePath \"\"" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.016142 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qmq6\" (UniqueName: \"kubernetes.io/projected/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-kube-api-access-8qmq6\") on node \"crc\" DevicePath \"\"" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.016155 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.016163 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/277187d7-c71a-4583-8d65-2e713e20557d-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.016170 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.019833 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b5d003c-9d11-417b-aafd-19fde5a27981-kube-api-access-hktl7" (OuterVolumeSpecName: "kube-api-access-hktl7") pod "4b5d003c-9d11-417b-aafd-19fde5a27981" (UID: "4b5d003c-9d11-417b-aafd-19fde5a27981"). InnerVolumeSpecName "kube-api-access-hktl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.019859 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f024673-515b-451c-b19c-f542b4cebba9-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "2f024673-515b-451c-b19c-f542b4cebba9" (UID: "2f024673-515b-451c-b19c-f542b4cebba9"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.020319 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f024673-515b-451c-b19c-f542b4cebba9-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "2f024673-515b-451c-b19c-f542b4cebba9" (UID: "2f024673-515b-451c-b19c-f542b4cebba9"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.020470 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55c01569-3cd2-4c5f-9039-b61176dac0f3-utilities" (OuterVolumeSpecName: "utilities") pod "55c01569-3cd2-4c5f-9039-b61176dac0f3" (UID: "55c01569-3cd2-4c5f-9039-b61176dac0f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.021134 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b5d003c-9d11-417b-aafd-19fde5a27981-utilities" (OuterVolumeSpecName: "utilities") pod "4b5d003c-9d11-417b-aafd-19fde5a27981" (UID: "4b5d003c-9d11-417b-aafd-19fde5a27981"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.022871 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55c01569-3cd2-4c5f-9039-b61176dac0f3-kube-api-access-b85hv" (OuterVolumeSpecName: "kube-api-access-b85hv") pod "55c01569-3cd2-4c5f-9039-b61176dac0f3" (UID: "55c01569-3cd2-4c5f-9039-b61176dac0f3"). InnerVolumeSpecName "kube-api-access-b85hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.022945 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f024673-515b-451c-b19c-f542b4cebba9-kube-api-access-hp55s" (OuterVolumeSpecName: "kube-api-access-hp55s") pod "2f024673-515b-451c-b19c-f542b4cebba9" (UID: "2f024673-515b-451c-b19c-f542b4cebba9"). InnerVolumeSpecName "kube-api-access-hp55s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.061787 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b5d003c-9d11-417b-aafd-19fde5a27981-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b5d003c-9d11-417b-aafd-19fde5a27981" (UID: "4b5d003c-9d11-417b-aafd-19fde5a27981"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.117653 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b5d003c-9d11-417b-aafd-19fde5a27981-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.117708 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f024673-515b-451c-b19c-f542b4cebba9-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.117725 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b85hv\" (UniqueName: \"kubernetes.io/projected/55c01569-3cd2-4c5f-9039-b61176dac0f3-kube-api-access-b85hv\") on node \"crc\" DevicePath \"\"" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.117739 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hktl7\" (UniqueName: \"kubernetes.io/projected/4b5d003c-9d11-417b-aafd-19fde5a27981-kube-api-access-hktl7\") on node \"crc\" DevicePath \"\"" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.117751 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55c01569-3cd2-4c5f-9039-b61176dac0f3-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.117764 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b5d003c-9d11-417b-aafd-19fde5a27981-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.117775 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp55s\" (UniqueName: \"kubernetes.io/projected/2f024673-515b-451c-b19c-f542b4cebba9-kube-api-access-hp55s\") on node \"crc\" DevicePath \"\"" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.117786 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2f024673-515b-451c-b19c-f542b4cebba9-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.147450 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55c01569-3cd2-4c5f-9039-b61176dac0f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55c01569-3cd2-4c5f-9039-b61176dac0f3" (UID: "55c01569-3cd2-4c5f-9039-b61176dac0f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.219305 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55c01569-3cd2-4c5f-9039-b61176dac0f3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.248367 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zhrlh"] Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.318354 4760 generic.go:334] "Generic (PLEG): container finished" podID="277187d7-c71a-4583-8d65-2e713e20557d" containerID="b6a9de9f71dc3d0e339e3859de933f1498f6410e6a0c4a447f83667bb6684089" exitCode=0 Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.318432 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqm68" event={"ID":"277187d7-c71a-4583-8d65-2e713e20557d","Type":"ContainerDied","Data":"b6a9de9f71dc3d0e339e3859de933f1498f6410e6a0c4a447f83667bb6684089"} Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.318466 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqm68" event={"ID":"277187d7-c71a-4583-8d65-2e713e20557d","Type":"ContainerDied","Data":"9cc5938c55c506f4c114a0d5f42c2d50b2be51dce08d25210e8743a920ecfd3b"} Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.318483 4760 scope.go:117] "RemoveContainer" containerID="b6a9de9f71dc3d0e339e3859de933f1498f6410e6a0c4a447f83667bb6684089" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.318628 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xqm68" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.319588 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zhrlh" event={"ID":"a8049c10-25cf-46ee-b24a-42b5e5af0d6a","Type":"ContainerStarted","Data":"116b57f60c82e530a58f8cf971e4a95de66b710afa2018e5fddecf6aabdd218c"} Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.322641 4760 generic.go:334] "Generic (PLEG): container finished" podID="4b5d003c-9d11-417b-aafd-19fde5a27981" containerID="d68865fab42c77f298c74b142535642bb568f10fca2b5616d3bd1e46f5b19cfa" exitCode=0 Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.322694 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lszcc" event={"ID":"4b5d003c-9d11-417b-aafd-19fde5a27981","Type":"ContainerDied","Data":"d68865fab42c77f298c74b142535642bb568f10fca2b5616d3bd1e46f5b19cfa"} Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.322720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lszcc" event={"ID":"4b5d003c-9d11-417b-aafd-19fde5a27981","Type":"ContainerDied","Data":"108c4f3650febb6f045863a29717bd14f2bcc084a927f3ae1eb077191c9ae272"} Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.322778 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lszcc" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.326654 4760 generic.go:334] "Generic (PLEG): container finished" podID="89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" containerID="cc8bbfea7df506fd1696314300f522eb2d946dca056c3921fa60e7d200324f60" exitCode=0 Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.326715 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr26d" event={"ID":"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1","Type":"ContainerDied","Data":"cc8bbfea7df506fd1696314300f522eb2d946dca056c3921fa60e7d200324f60"} Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.326742 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr26d" event={"ID":"89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1","Type":"ContainerDied","Data":"6ed4a2eea571a9c1aba22cf422f2d2816706954a3222e0d643d8ac76162e8702"} Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.326797 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mr26d" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.331128 4760 generic.go:334] "Generic (PLEG): container finished" podID="55c01569-3cd2-4c5f-9039-b61176dac0f3" containerID="2ed2f0e11f704c3684e25cfe5d2472c52463bd43aa53479e8273d055b1031e02" exitCode=0 Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.331171 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7msw2" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.331170 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7msw2" event={"ID":"55c01569-3cd2-4c5f-9039-b61176dac0f3","Type":"ContainerDied","Data":"2ed2f0e11f704c3684e25cfe5d2472c52463bd43aa53479e8273d055b1031e02"} Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.331314 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7msw2" event={"ID":"55c01569-3cd2-4c5f-9039-b61176dac0f3","Type":"ContainerDied","Data":"fed0264430a07eae7a46ee4d6d8d7f911aa5d60a2ad67a0072f471aceabf782f"} Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.333701 4760 generic.go:334] "Generic (PLEG): container finished" podID="2f024673-515b-451c-b19c-f542b4cebba9" containerID="b05c9c6bbaa5fe79c729237a0fdc191f6dadc17d946292a69082a9914698c6ea" exitCode=0 Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.333737 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w927h" event={"ID":"2f024673-515b-451c-b19c-f542b4cebba9","Type":"ContainerDied","Data":"b05c9c6bbaa5fe79c729237a0fdc191f6dadc17d946292a69082a9914698c6ea"} Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.333756 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w927h" event={"ID":"2f024673-515b-451c-b19c-f542b4cebba9","Type":"ContainerDied","Data":"2d5ba775ade53d208f8e421bf245f5f5cda1691f5e52be1cc59381fde4b0f89c"} Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.333810 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w927h" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.337797 4760 scope.go:117] "RemoveContainer" containerID="1f1fb51426a7d89f32ddde43eb3428a7ead38e88a05c1bef65ec515dbcea4a8a" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.353686 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xqm68"] Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.356832 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xqm68"] Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.367607 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mr26d"] Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.369514 4760 scope.go:117] "RemoveContainer" containerID="87bbbc4a924633fac60b85e5e666e81124d050bba11c1b4da772850a3b6518ca" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.374590 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mr26d"] Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.380138 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7msw2"] Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.384435 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7msw2"] Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.388241 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w927h"] Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.395300 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w927h"] Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.398678 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lszcc"] Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.402356 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lszcc"] Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.402378 4760 scope.go:117] "RemoveContainer" containerID="b6a9de9f71dc3d0e339e3859de933f1498f6410e6a0c4a447f83667bb6684089" Dec 27 05:50:09 crc kubenswrapper[4760]: E1227 05:50:09.402818 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6a9de9f71dc3d0e339e3859de933f1498f6410e6a0c4a447f83667bb6684089\": container with ID starting with b6a9de9f71dc3d0e339e3859de933f1498f6410e6a0c4a447f83667bb6684089 not found: ID does not exist" containerID="b6a9de9f71dc3d0e339e3859de933f1498f6410e6a0c4a447f83667bb6684089" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.402845 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6a9de9f71dc3d0e339e3859de933f1498f6410e6a0c4a447f83667bb6684089"} err="failed to get container status \"b6a9de9f71dc3d0e339e3859de933f1498f6410e6a0c4a447f83667bb6684089\": rpc error: code = NotFound desc = could not find container \"b6a9de9f71dc3d0e339e3859de933f1498f6410e6a0c4a447f83667bb6684089\": container with ID starting with b6a9de9f71dc3d0e339e3859de933f1498f6410e6a0c4a447f83667bb6684089 not found: ID does not exist" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.402867 4760 scope.go:117] "RemoveContainer" 
containerID="1f1fb51426a7d89f32ddde43eb3428a7ead38e88a05c1bef65ec515dbcea4a8a" Dec 27 05:50:09 crc kubenswrapper[4760]: E1227 05:50:09.403192 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f1fb51426a7d89f32ddde43eb3428a7ead38e88a05c1bef65ec515dbcea4a8a\": container with ID starting with 1f1fb51426a7d89f32ddde43eb3428a7ead38e88a05c1bef65ec515dbcea4a8a not found: ID does not exist" containerID="1f1fb51426a7d89f32ddde43eb3428a7ead38e88a05c1bef65ec515dbcea4a8a" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.403214 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f1fb51426a7d89f32ddde43eb3428a7ead38e88a05c1bef65ec515dbcea4a8a"} err="failed to get container status \"1f1fb51426a7d89f32ddde43eb3428a7ead38e88a05c1bef65ec515dbcea4a8a\": rpc error: code = NotFound desc = could not find container \"1f1fb51426a7d89f32ddde43eb3428a7ead38e88a05c1bef65ec515dbcea4a8a\": container with ID starting with 1f1fb51426a7d89f32ddde43eb3428a7ead38e88a05c1bef65ec515dbcea4a8a not found: ID does not exist" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.403228 4760 scope.go:117] "RemoveContainer" containerID="87bbbc4a924633fac60b85e5e666e81124d050bba11c1b4da772850a3b6518ca" Dec 27 05:50:09 crc kubenswrapper[4760]: E1227 05:50:09.403459 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87bbbc4a924633fac60b85e5e666e81124d050bba11c1b4da772850a3b6518ca\": container with ID starting with 87bbbc4a924633fac60b85e5e666e81124d050bba11c1b4da772850a3b6518ca not found: ID does not exist" containerID="87bbbc4a924633fac60b85e5e666e81124d050bba11c1b4da772850a3b6518ca" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.403486 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87bbbc4a924633fac60b85e5e666e81124d050bba11c1b4da772850a3b6518ca"} err="failed to get container status \"87bbbc4a924633fac60b85e5e666e81124d050bba11c1b4da772850a3b6518ca\": rpc error: code = NotFound desc = could not find container \"87bbbc4a924633fac60b85e5e666e81124d050bba11c1b4da772850a3b6518ca\": container with ID starting with 87bbbc4a924633fac60b85e5e666e81124d050bba11c1b4da772850a3b6518ca not found: ID does not exist" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.403505 4760 scope.go:117] "RemoveContainer" containerID="d68865fab42c77f298c74b142535642bb568f10fca2b5616d3bd1e46f5b19cfa" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.416986 4760 scope.go:117] "RemoveContainer" containerID="abe6e183e1498fc04e19548ccaa473403e9ab9e6f7d0e11c2dcafa0c3e830863" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.429342 4760 scope.go:117] "RemoveContainer" containerID="514870c8d015164540deba94de6ee25ecf4fed35a4eeff21a643f159d0238b18" Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.441314 4760 scope.go:117] "RemoveContainer" containerID="d68865fab42c77f298c74b142535642bb568f10fca2b5616d3bd1e46f5b19cfa" Dec 27 05:50:09 crc kubenswrapper[4760]: E1227 05:50:09.441592 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d68865fab42c77f298c74b142535642bb568f10fca2b5616d3bd1e46f5b19cfa\": container with ID starting with d68865fab42c77f298c74b142535642bb568f10fca2b5616d3bd1e46f5b19cfa not found: ID does not exist" containerID="d68865fab42c77f298c74b142535642bb568f10fca2b5616d3bd1e46f5b19cfa" 
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.441623 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d68865fab42c77f298c74b142535642bb568f10fca2b5616d3bd1e46f5b19cfa"} err="failed to get container status \"d68865fab42c77f298c74b142535642bb568f10fca2b5616d3bd1e46f5b19cfa\": rpc error: code = NotFound desc = could not find container \"d68865fab42c77f298c74b142535642bb568f10fca2b5616d3bd1e46f5b19cfa\": container with ID starting with d68865fab42c77f298c74b142535642bb568f10fca2b5616d3bd1e46f5b19cfa not found: ID does not exist"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.441643 4760 scope.go:117] "RemoveContainer" containerID="abe6e183e1498fc04e19548ccaa473403e9ab9e6f7d0e11c2dcafa0c3e830863"
Dec 27 05:50:09 crc kubenswrapper[4760]: E1227 05:50:09.441821 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abe6e183e1498fc04e19548ccaa473403e9ab9e6f7d0e11c2dcafa0c3e830863\": container with ID starting with abe6e183e1498fc04e19548ccaa473403e9ab9e6f7d0e11c2dcafa0c3e830863 not found: ID does not exist" containerID="abe6e183e1498fc04e19548ccaa473403e9ab9e6f7d0e11c2dcafa0c3e830863"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.441839 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe6e183e1498fc04e19548ccaa473403e9ab9e6f7d0e11c2dcafa0c3e830863"} err="failed to get container status \"abe6e183e1498fc04e19548ccaa473403e9ab9e6f7d0e11c2dcafa0c3e830863\": rpc error: code = NotFound desc = could not find container \"abe6e183e1498fc04e19548ccaa473403e9ab9e6f7d0e11c2dcafa0c3e830863\": container with ID starting with abe6e183e1498fc04e19548ccaa473403e9ab9e6f7d0e11c2dcafa0c3e830863 not found: ID does not exist"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.441851 4760 scope.go:117] "RemoveContainer" containerID="514870c8d015164540deba94de6ee25ecf4fed35a4eeff21a643f159d0238b18"
Dec 27 05:50:09 crc kubenswrapper[4760]: E1227 05:50:09.442014 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"514870c8d015164540deba94de6ee25ecf4fed35a4eeff21a643f159d0238b18\": container with ID starting with 514870c8d015164540deba94de6ee25ecf4fed35a4eeff21a643f159d0238b18 not found: ID does not exist" containerID="514870c8d015164540deba94de6ee25ecf4fed35a4eeff21a643f159d0238b18"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.442034 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"514870c8d015164540deba94de6ee25ecf4fed35a4eeff21a643f159d0238b18"} err="failed to get container status \"514870c8d015164540deba94de6ee25ecf4fed35a4eeff21a643f159d0238b18\": rpc error: code = NotFound desc = could not find container \"514870c8d015164540deba94de6ee25ecf4fed35a4eeff21a643f159d0238b18\": container with ID starting with 514870c8d015164540deba94de6ee25ecf4fed35a4eeff21a643f159d0238b18 not found: ID does not exist"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.442046 4760 scope.go:117] "RemoveContainer" containerID="cc8bbfea7df506fd1696314300f522eb2d946dca056c3921fa60e7d200324f60"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.451702 4760 scope.go:117] "RemoveContainer" containerID="bf89bde39ea3efe37f8f7212f9a810accb945b939ea44643e998f6b6d3507064"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.461084 4760 scope.go:117] "RemoveContainer" containerID="381e968739089da9ac3ea6ca4b4e14fd50e290e7724163ac2a2e0b8915ce8931"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.472424 4760 scope.go:117] "RemoveContainer" containerID="cc8bbfea7df506fd1696314300f522eb2d946dca056c3921fa60e7d200324f60"
Dec 27 05:50:09 crc kubenswrapper[4760]: E1227 05:50:09.472689 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc8bbfea7df506fd1696314300f522eb2d946dca056c3921fa60e7d200324f60\": container with ID starting with cc8bbfea7df506fd1696314300f522eb2d946dca056c3921fa60e7d200324f60 not found: ID does not exist" containerID="cc8bbfea7df506fd1696314300f522eb2d946dca056c3921fa60e7d200324f60"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.472720 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc8bbfea7df506fd1696314300f522eb2d946dca056c3921fa60e7d200324f60"} err="failed to get container status \"cc8bbfea7df506fd1696314300f522eb2d946dca056c3921fa60e7d200324f60\": rpc error: code = NotFound desc = could not find container \"cc8bbfea7df506fd1696314300f522eb2d946dca056c3921fa60e7d200324f60\": container with ID starting with cc8bbfea7df506fd1696314300f522eb2d946dca056c3921fa60e7d200324f60 not found: ID does not exist"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.472742 4760 scope.go:117] "RemoveContainer" containerID="bf89bde39ea3efe37f8f7212f9a810accb945b939ea44643e998f6b6d3507064"
Dec 27 05:50:09 crc kubenswrapper[4760]: E1227 05:50:09.472919 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf89bde39ea3efe37f8f7212f9a810accb945b939ea44643e998f6b6d3507064\": container with ID starting with bf89bde39ea3efe37f8f7212f9a810accb945b939ea44643e998f6b6d3507064 not found: ID does not exist" containerID="bf89bde39ea3efe37f8f7212f9a810accb945b939ea44643e998f6b6d3507064"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.472946 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf89bde39ea3efe37f8f7212f9a810accb945b939ea44643e998f6b6d3507064"} err="failed to get container status \"bf89bde39ea3efe37f8f7212f9a810accb945b939ea44643e998f6b6d3507064\": rpc error: code = NotFound desc = could not find container \"bf89bde39ea3efe37f8f7212f9a810accb945b939ea44643e998f6b6d3507064\": container with ID starting with bf89bde39ea3efe37f8f7212f9a810accb945b939ea44643e998f6b6d3507064 not found: ID does not exist"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.472963 4760 scope.go:117] "RemoveContainer" containerID="381e968739089da9ac3ea6ca4b4e14fd50e290e7724163ac2a2e0b8915ce8931"
Dec 27 05:50:09 crc kubenswrapper[4760]: E1227 05:50:09.473187 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"381e968739089da9ac3ea6ca4b4e14fd50e290e7724163ac2a2e0b8915ce8931\": container with ID starting with 381e968739089da9ac3ea6ca4b4e14fd50e290e7724163ac2a2e0b8915ce8931 not found: ID does not exist" containerID="381e968739089da9ac3ea6ca4b4e14fd50e290e7724163ac2a2e0b8915ce8931"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.473216 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"381e968739089da9ac3ea6ca4b4e14fd50e290e7724163ac2a2e0b8915ce8931"} err="failed to get container status \"381e968739089da9ac3ea6ca4b4e14fd50e290e7724163ac2a2e0b8915ce8931\": rpc error: code = NotFound desc = could not find container \"381e968739089da9ac3ea6ca4b4e14fd50e290e7724163ac2a2e0b8915ce8931\": container with ID starting with 381e968739089da9ac3ea6ca4b4e14fd50e290e7724163ac2a2e0b8915ce8931 not found: ID does not exist"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.473233 4760 scope.go:117] "RemoveContainer" containerID="2ed2f0e11f704c3684e25cfe5d2472c52463bd43aa53479e8273d055b1031e02"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.509805 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277187d7-c71a-4583-8d65-2e713e20557d" path="/var/lib/kubelet/pods/277187d7-c71a-4583-8d65-2e713e20557d/volumes"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.510725 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f024673-515b-451c-b19c-f542b4cebba9" path="/var/lib/kubelet/pods/2f024673-515b-451c-b19c-f542b4cebba9/volumes"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.511534 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b5d003c-9d11-417b-aafd-19fde5a27981" path="/var/lib/kubelet/pods/4b5d003c-9d11-417b-aafd-19fde5a27981/volumes"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.514505 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" path="/var/lib/kubelet/pods/55c01569-3cd2-4c5f-9039-b61176dac0f3/volumes"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.515106 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" path="/var/lib/kubelet/pods/89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1/volumes"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.520912 4760 scope.go:117] "RemoveContainer" containerID="fea1b5310b68b70f9d441be0cbf3af0700828e95145270347c0119a06248c608"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.540044 4760 scope.go:117] "RemoveContainer" containerID="a192e6a47d0c84ff513bf8cd875c02f3ab5abbb5e7bd59a11e3f9d8d1bff72d5"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.553967 4760 scope.go:117] "RemoveContainer" containerID="2ed2f0e11f704c3684e25cfe5d2472c52463bd43aa53479e8273d055b1031e02"
Dec 27 05:50:09 crc kubenswrapper[4760]: E1227 05:50:09.554345 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ed2f0e11f704c3684e25cfe5d2472c52463bd43aa53479e8273d055b1031e02\": container with ID starting with 2ed2f0e11f704c3684e25cfe5d2472c52463bd43aa53479e8273d055b1031e02 not found: ID does not exist" containerID="2ed2f0e11f704c3684e25cfe5d2472c52463bd43aa53479e8273d055b1031e02"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.554383 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ed2f0e11f704c3684e25cfe5d2472c52463bd43aa53479e8273d055b1031e02"} err="failed to get container status \"2ed2f0e11f704c3684e25cfe5d2472c52463bd43aa53479e8273d055b1031e02\": rpc error: code = NotFound desc = could not find container \"2ed2f0e11f704c3684e25cfe5d2472c52463bd43aa53479e8273d055b1031e02\": container with ID starting with 2ed2f0e11f704c3684e25cfe5d2472c52463bd43aa53479e8273d055b1031e02 not found: ID does not exist"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.554414 4760 scope.go:117] "RemoveContainer" containerID="fea1b5310b68b70f9d441be0cbf3af0700828e95145270347c0119a06248c608"
Dec 27 05:50:09 crc kubenswrapper[4760]: E1227 05:50:09.554698 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fea1b5310b68b70f9d441be0cbf3af0700828e95145270347c0119a06248c608\": container with ID starting with fea1b5310b68b70f9d441be0cbf3af0700828e95145270347c0119a06248c608 not found: ID does not exist" containerID="fea1b5310b68b70f9d441be0cbf3af0700828e95145270347c0119a06248c608"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.554755 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea1b5310b68b70f9d441be0cbf3af0700828e95145270347c0119a06248c608"} err="failed to get container status \"fea1b5310b68b70f9d441be0cbf3af0700828e95145270347c0119a06248c608\": rpc error: code = NotFound desc = could not find container \"fea1b5310b68b70f9d441be0cbf3af0700828e95145270347c0119a06248c608\": container with ID starting with fea1b5310b68b70f9d441be0cbf3af0700828e95145270347c0119a06248c608 not found: ID does not exist"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.554794 4760 scope.go:117] "RemoveContainer" containerID="a192e6a47d0c84ff513bf8cd875c02f3ab5abbb5e7bd59a11e3f9d8d1bff72d5"
Dec 27 05:50:09 crc kubenswrapper[4760]: E1227 05:50:09.555188 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a192e6a47d0c84ff513bf8cd875c02f3ab5abbb5e7bd59a11e3f9d8d1bff72d5\": container with ID starting with a192e6a47d0c84ff513bf8cd875c02f3ab5abbb5e7bd59a11e3f9d8d1bff72d5 not found: ID does not exist" containerID="a192e6a47d0c84ff513bf8cd875c02f3ab5abbb5e7bd59a11e3f9d8d1bff72d5"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.555250 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a192e6a47d0c84ff513bf8cd875c02f3ab5abbb5e7bd59a11e3f9d8d1bff72d5"} err="failed to get container status \"a192e6a47d0c84ff513bf8cd875c02f3ab5abbb5e7bd59a11e3f9d8d1bff72d5\": rpc error: code = NotFound desc = could not find container \"a192e6a47d0c84ff513bf8cd875c02f3ab5abbb5e7bd59a11e3f9d8d1bff72d5\": container with ID starting with a192e6a47d0c84ff513bf8cd875c02f3ab5abbb5e7bd59a11e3f9d8d1bff72d5 not found: ID does not exist"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.555282 4760 scope.go:117] "RemoveContainer" containerID="b05c9c6bbaa5fe79c729237a0fdc191f6dadc17d946292a69082a9914698c6ea"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.569497 4760 scope.go:117] "RemoveContainer" containerID="b05c9c6bbaa5fe79c729237a0fdc191f6dadc17d946292a69082a9914698c6ea"
Dec 27 05:50:09 crc kubenswrapper[4760]: E1227 05:50:09.569842 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b05c9c6bbaa5fe79c729237a0fdc191f6dadc17d946292a69082a9914698c6ea\": container with ID starting with b05c9c6bbaa5fe79c729237a0fdc191f6dadc17d946292a69082a9914698c6ea not found: ID does not exist" containerID="b05c9c6bbaa5fe79c729237a0fdc191f6dadc17d946292a69082a9914698c6ea"
Dec 27 05:50:09 crc kubenswrapper[4760]: I1227 05:50:09.569881 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b05c9c6bbaa5fe79c729237a0fdc191f6dadc17d946292a69082a9914698c6ea"} err="failed to get container status \"b05c9c6bbaa5fe79c729237a0fdc191f6dadc17d946292a69082a9914698c6ea\": rpc error: code = NotFound desc = could not find container \"b05c9c6bbaa5fe79c729237a0fdc191f6dadc17d946292a69082a9914698c6ea\": container with ID starting with b05c9c6bbaa5fe79c729237a0fdc191f6dadc17d946292a69082a9914698c6ea not found: ID does not exist"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.573206 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-75pzc"]
Dec 27 05:50:10 crc kubenswrapper[4760]: E1227 05:50:10.573712 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5d003c-9d11-417b-aafd-19fde5a27981" containerName="extract-utilities"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.573729 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b5d003c-9d11-417b-aafd-19fde5a27981" containerName="extract-utilities"
Dec 27 05:50:10 crc kubenswrapper[4760]: E1227 05:50:10.573740 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" containerName="extract-utilities"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.573748 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" containerName="extract-utilities"
Dec 27 05:50:10 crc kubenswrapper[4760]: E1227 05:50:10.573757 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277187d7-c71a-4583-8d65-2e713e20557d" containerName="extract-content"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.573769 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="277187d7-c71a-4583-8d65-2e713e20557d" containerName="extract-content"
Dec 27 05:50:10 crc kubenswrapper[4760]: E1227 05:50:10.573779 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" containerName="extract-content"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.573787 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" containerName="extract-content"
Dec 27 05:50:10 crc kubenswrapper[4760]: E1227 05:50:10.573796 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f024673-515b-451c-b19c-f542b4cebba9" containerName="marketplace-operator"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.573804 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f024673-515b-451c-b19c-f542b4cebba9" containerName="marketplace-operator"
Dec 27 05:50:10 crc kubenswrapper[4760]: E1227 05:50:10.573815 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277187d7-c71a-4583-8d65-2e713e20557d" containerName="extract-utilities"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.573823 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="277187d7-c71a-4583-8d65-2e713e20557d" containerName="extract-utilities"
Dec 27 05:50:10 crc kubenswrapper[4760]: E1227 05:50:10.573834 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" containerName="registry-server"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.573841 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" containerName="registry-server"
Dec 27 05:50:10 crc kubenswrapper[4760]: E1227 05:50:10.573851 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" containerName="extract-utilities"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.573860 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" containerName="extract-utilities"
Dec 27 05:50:10 crc kubenswrapper[4760]: E1227 05:50:10.573869 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277187d7-c71a-4583-8d65-2e713e20557d" containerName="registry-server"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.573879 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="277187d7-c71a-4583-8d65-2e713e20557d" containerName="registry-server"
Dec 27 05:50:10 crc kubenswrapper[4760]: E1227 05:50:10.573894 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5d003c-9d11-417b-aafd-19fde5a27981" containerName="registry-server"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.573901 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b5d003c-9d11-417b-aafd-19fde5a27981" containerName="registry-server"
Dec 27 05:50:10 crc kubenswrapper[4760]: E1227 05:50:10.573914 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" containerName="registry-server"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.573921 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" containerName="registry-server"
Dec 27 05:50:10 crc kubenswrapper[4760]: E1227 05:50:10.573932 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5d003c-9d11-417b-aafd-19fde5a27981" containerName="extract-content"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.573940 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b5d003c-9d11-417b-aafd-19fde5a27981" containerName="extract-content"
Dec 27 05:50:10 crc kubenswrapper[4760]: E1227 05:50:10.573951 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" containerName="extract-content"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.573958 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" containerName="extract-content"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.574073 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f024673-515b-451c-b19c-f542b4cebba9" containerName="marketplace-operator"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.574108 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b5d003c-9d11-417b-aafd-19fde5a27981" containerName="registry-server"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.574123 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="89c3b09c-a1fb-452c-bea1-b8c85e6c4ab1" containerName="registry-server"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.574141 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="277187d7-c71a-4583-8d65-2e713e20557d" containerName="registry-server"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.574149 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c01569-3cd2-4c5f-9039-b61176dac0f3" containerName="registry-server"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.575129 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75pzc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.577337 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.587260 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-75pzc"]
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.635664 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33de9e2-91e8-425a-8250-5301c5aef450-catalog-content\") pod \"redhat-marketplace-75pzc\" (UID: \"c33de9e2-91e8-425a-8250-5301c5aef450\") " pod="openshift-marketplace/redhat-marketplace-75pzc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.635917 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33de9e2-91e8-425a-8250-5301c5aef450-utilities\") pod \"redhat-marketplace-75pzc\" (UID: \"c33de9e2-91e8-425a-8250-5301c5aef450\") " pod="openshift-marketplace/redhat-marketplace-75pzc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.636045 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmg4s\" (UniqueName: \"kubernetes.io/projected/c33de9e2-91e8-425a-8250-5301c5aef450-kube-api-access-cmg4s\") pod \"redhat-marketplace-75pzc\" (UID: \"c33de9e2-91e8-425a-8250-5301c5aef450\") " pod="openshift-marketplace/redhat-marketplace-75pzc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.737877 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmg4s\" (UniqueName: \"kubernetes.io/projected/c33de9e2-91e8-425a-8250-5301c5aef450-kube-api-access-cmg4s\") pod \"redhat-marketplace-75pzc\" (UID: \"c33de9e2-91e8-425a-8250-5301c5aef450\") " pod="openshift-marketplace/redhat-marketplace-75pzc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.737949 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33de9e2-91e8-425a-8250-5301c5aef450-catalog-content\") pod \"redhat-marketplace-75pzc\" (UID: \"c33de9e2-91e8-425a-8250-5301c5aef450\") " pod="openshift-marketplace/redhat-marketplace-75pzc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.738057 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33de9e2-91e8-425a-8250-5301c5aef450-utilities\") pod \"redhat-marketplace-75pzc\" (UID: \"c33de9e2-91e8-425a-8250-5301c5aef450\") " pod="openshift-marketplace/redhat-marketplace-75pzc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.738723 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33de9e2-91e8-425a-8250-5301c5aef450-utilities\") pod \"redhat-marketplace-75pzc\" (UID: \"c33de9e2-91e8-425a-8250-5301c5aef450\") " pod="openshift-marketplace/redhat-marketplace-75pzc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.739210 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33de9e2-91e8-425a-8250-5301c5aef450-catalog-content\") pod \"redhat-marketplace-75pzc\" (UID: \"c33de9e2-91e8-425a-8250-5301c5aef450\") " pod="openshift-marketplace/redhat-marketplace-75pzc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.763357 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmg4s\" (UniqueName: \"kubernetes.io/projected/c33de9e2-91e8-425a-8250-5301c5aef450-kube-api-access-cmg4s\") pod \"redhat-marketplace-75pzc\" (UID: \"c33de9e2-91e8-425a-8250-5301c5aef450\") " pod="openshift-marketplace/redhat-marketplace-75pzc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.772306 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f5xqc"]
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.773369 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5xqc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.775863 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.783277 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f5xqc"]
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.873455 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npdl7\" (UniqueName: \"kubernetes.io/projected/8924ab71-d3e4-4709-a666-c70f96fe55a7-kube-api-access-npdl7\") pod \"certified-operators-f5xqc\" (UID: \"8924ab71-d3e4-4709-a666-c70f96fe55a7\") " pod="openshift-marketplace/certified-operators-f5xqc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.873514 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8924ab71-d3e4-4709-a666-c70f96fe55a7-utilities\") pod \"certified-operators-f5xqc\" (UID: \"8924ab71-d3e4-4709-a666-c70f96fe55a7\") " pod="openshift-marketplace/certified-operators-f5xqc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.873550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8924ab71-d3e4-4709-a666-c70f96fe55a7-catalog-content\") pod \"certified-operators-f5xqc\" (UID: \"8924ab71-d3e4-4709-a666-c70f96fe55a7\") " pod="openshift-marketplace/certified-operators-f5xqc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.896133 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75pzc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.975223 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8924ab71-d3e4-4709-a666-c70f96fe55a7-catalog-content\") pod \"certified-operators-f5xqc\" (UID: \"8924ab71-d3e4-4709-a666-c70f96fe55a7\") " pod="openshift-marketplace/certified-operators-f5xqc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.975285 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npdl7\" (UniqueName: \"kubernetes.io/projected/8924ab71-d3e4-4709-a666-c70f96fe55a7-kube-api-access-npdl7\") pod \"certified-operators-f5xqc\" (UID: \"8924ab71-d3e4-4709-a666-c70f96fe55a7\") " pod="openshift-marketplace/certified-operators-f5xqc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.975327 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8924ab71-d3e4-4709-a666-c70f96fe55a7-utilities\") pod \"certified-operators-f5xqc\" (UID: \"8924ab71-d3e4-4709-a666-c70f96fe55a7\") " pod="openshift-marketplace/certified-operators-f5xqc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.976232 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8924ab71-d3e4-4709-a666-c70f96fe55a7-catalog-content\") pod \"certified-operators-f5xqc\" (UID: \"8924ab71-d3e4-4709-a666-c70f96fe55a7\") " pod="openshift-marketplace/certified-operators-f5xqc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.976385 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8924ab71-d3e4-4709-a666-c70f96fe55a7-utilities\") pod \"certified-operators-f5xqc\" (UID: \"8924ab71-d3e4-4709-a666-c70f96fe55a7\") " pod="openshift-marketplace/certified-operators-f5xqc"
Dec 27 05:50:10 crc kubenswrapper[4760]: I1227 05:50:10.995989 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npdl7\" (UniqueName: \"kubernetes.io/projected/8924ab71-d3e4-4709-a666-c70f96fe55a7-kube-api-access-npdl7\") pod \"certified-operators-f5xqc\" (UID: \"8924ab71-d3e4-4709-a666-c70f96fe55a7\") " pod="openshift-marketplace/certified-operators-f5xqc"
Dec 27 05:50:11 crc kubenswrapper[4760]: I1227 05:50:11.059193 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-75pzc"]
Dec 27 05:50:11 crc kubenswrapper[4760]: I1227 05:50:11.189839 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5xqc"
Dec 27 05:50:11 crc kubenswrapper[4760]: I1227 05:50:11.354756 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75pzc" event={"ID":"c33de9e2-91e8-425a-8250-5301c5aef450","Type":"ContainerStarted","Data":"d0c2672384d4e910bf8010c1ae9a66e92767d77149087ee0f2060b5d9881819b"}
Dec 27 05:50:11 crc kubenswrapper[4760]: I1227 05:50:11.583519 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f5xqc"]
Dec 27 05:50:11 crc kubenswrapper[4760]: W1227 05:50:11.596869 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8924ab71_d3e4_4709_a666_c70f96fe55a7.slice/crio-46be7cdb69147e64e17988e088d59e1e951b92b9cf34aa349783914ae14c7089 WatchSource:0}: Error finding container 46be7cdb69147e64e17988e088d59e1e951b92b9cf34aa349783914ae14c7089: Status 404 returned error can't find the container with id 46be7cdb69147e64e17988e088d59e1e951b92b9cf34aa349783914ae14c7089
Dec 27 05:50:12 crc kubenswrapper[4760]: I1227 05:50:12.363341 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5xqc" event={"ID":"8924ab71-d3e4-4709-a666-c70f96fe55a7","Type":"ContainerStarted","Data":"46be7cdb69147e64e17988e088d59e1e951b92b9cf34aa349783914ae14c7089"}
Dec 27 05:50:12 crc kubenswrapper[4760]: I1227 05:50:12.977620 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sjlcr"]
Dec 27 05:50:12 crc kubenswrapper[4760]: I1227 05:50:12.978707 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sjlcr"
Dec 27 05:50:12 crc kubenswrapper[4760]: I1227 05:50:12.982813 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 27 05:50:12 crc kubenswrapper[4760]: I1227 05:50:12.999288 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sjlcr"]
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.105205 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-utilities\") pod \"community-operators-sjlcr\" (UID: \"d88183c3-8274-4fc3-85b8-5f1f4e15a77b\") " pod="openshift-marketplace/community-operators-sjlcr"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.105274 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-catalog-content\") pod \"community-operators-sjlcr\" (UID: \"d88183c3-8274-4fc3-85b8-5f1f4e15a77b\") " pod="openshift-marketplace/community-operators-sjlcr"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.105516 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52gjp\" (UniqueName: \"kubernetes.io/projected/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-kube-api-access-52gjp\") pod \"community-operators-sjlcr\" (UID: \"d88183c3-8274-4fc3-85b8-5f1f4e15a77b\") " pod="openshift-marketplace/community-operators-sjlcr"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.182072 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k9hbs"]
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.183902 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9hbs"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.186212 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.202004 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9hbs"]
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.207428 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-utilities\") pod \"community-operators-sjlcr\" (UID: \"d88183c3-8274-4fc3-85b8-5f1f4e15a77b\") " pod="openshift-marketplace/community-operators-sjlcr"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.207680 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-catalog-content\") pod \"community-operators-sjlcr\" (UID: \"d88183c3-8274-4fc3-85b8-5f1f4e15a77b\") " pod="openshift-marketplace/community-operators-sjlcr"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.207807 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52gjp\" (UniqueName: \"kubernetes.io/projected/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-kube-api-access-52gjp\") pod \"community-operators-sjlcr\" (UID: \"d88183c3-8274-4fc3-85b8-5f1f4e15a77b\") " pod="openshift-marketplace/community-operators-sjlcr"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.208678 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-utilities\") pod \"community-operators-sjlcr\" (UID: \"d88183c3-8274-4fc3-85b8-5f1f4e15a77b\") " pod="openshift-marketplace/community-operators-sjlcr"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.208801 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-catalog-content\") pod \"community-operators-sjlcr\" (UID: \"d88183c3-8274-4fc3-85b8-5f1f4e15a77b\") " pod="openshift-marketplace/community-operators-sjlcr"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.232313 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52gjp\" (UniqueName: \"kubernetes.io/projected/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-kube-api-access-52gjp\") pod \"community-operators-sjlcr\" (UID: \"d88183c3-8274-4fc3-85b8-5f1f4e15a77b\") " pod="openshift-marketplace/community-operators-sjlcr"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.309483 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c6fd0b7-8355-48f8-bc7b-80168772194d-catalog-content\") pod \"redhat-operators-k9hbs\" (UID: \"9c6fd0b7-8355-48f8-bc7b-80168772194d\") " pod="openshift-marketplace/redhat-operators-k9hbs"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.309560 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmrq4\" (UniqueName: \"kubernetes.io/projected/9c6fd0b7-8355-48f8-bc7b-80168772194d-kube-api-access-pmrq4\") pod \"redhat-operators-k9hbs\" (UID: \"9c6fd0b7-8355-48f8-bc7b-80168772194d\") " pod="openshift-marketplace/redhat-operators-k9hbs"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.309596 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c6fd0b7-8355-48f8-bc7b-80168772194d-utilities\") pod \"redhat-operators-k9hbs\" (UID: \"9c6fd0b7-8355-48f8-bc7b-80168772194d\") " pod="openshift-marketplace/redhat-operators-k9hbs"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.315283 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sjlcr"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.411046 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c6fd0b7-8355-48f8-bc7b-80168772194d-catalog-content\") pod \"redhat-operators-k9hbs\" (UID: \"9c6fd0b7-8355-48f8-bc7b-80168772194d\") " pod="openshift-marketplace/redhat-operators-k9hbs"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.411296 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmrq4\" (UniqueName: \"kubernetes.io/projected/9c6fd0b7-8355-48f8-bc7b-80168772194d-kube-api-access-pmrq4\") pod \"redhat-operators-k9hbs\" (UID: \"9c6fd0b7-8355-48f8-bc7b-80168772194d\") " pod="openshift-marketplace/redhat-operators-k9hbs"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.411323 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c6fd0b7-8355-48f8-bc7b-80168772194d-utilities\") pod \"redhat-operators-k9hbs\" (UID: \"9c6fd0b7-8355-48f8-bc7b-80168772194d\") " pod="openshift-marketplace/redhat-operators-k9hbs"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.411566 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c6fd0b7-8355-48f8-bc7b-80168772194d-catalog-content\") pod \"redhat-operators-k9hbs\" (UID: \"9c6fd0b7-8355-48f8-bc7b-80168772194d\") " pod="openshift-marketplace/redhat-operators-k9hbs"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.411663 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c6fd0b7-8355-48f8-bc7b-80168772194d-utilities\") pod \"redhat-operators-k9hbs\" (UID: \"9c6fd0b7-8355-48f8-bc7b-80168772194d\") " pod="openshift-marketplace/redhat-operators-k9hbs"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.436383 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmrq4\" (UniqueName: \"kubernetes.io/projected/9c6fd0b7-8355-48f8-bc7b-80168772194d-kube-api-access-pmrq4\") pod \"redhat-operators-k9hbs\" (UID: \"9c6fd0b7-8355-48f8-bc7b-80168772194d\") " pod="openshift-marketplace/redhat-operators-k9hbs"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.512463 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9hbs"
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.734105 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sjlcr"]
Dec 27 05:50:13 crc kubenswrapper[4760]: W1227 05:50:13.742517 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd88183c3_8274_4fc3_85b8_5f1f4e15a77b.slice/crio-8171423e4969219ddbd5520acaa25b12a872773eb3ff91ded37c899525451dda WatchSource:0}: Error finding container 8171423e4969219ddbd5520acaa25b12a872773eb3ff91ded37c899525451dda: Status 404 returned error can't find the container with id 8171423e4969219ddbd5520acaa25b12a872773eb3ff91ded37c899525451dda
Dec 27 05:50:13 crc kubenswrapper[4760]: I1227 05:50:13.902758 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9hbs"]
Dec 27 05:50:13 crc kubenswrapper[4760]: W1227 05:50:13.911641 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c6fd0b7_8355_48f8_bc7b_80168772194d.slice/crio-e6a91befa96c6952ae8d4b14f5a33a15cac8701788ac3e3e81982f647d66cee4 WatchSource:0}: Error finding container e6a91befa96c6952ae8d4b14f5a33a15cac8701788ac3e3e81982f647d66cee4: Status 404 returned error can't find the container with id e6a91befa96c6952ae8d4b14f5a33a15cac8701788ac3e3e81982f647d66cee4
Dec 27 05:50:14 crc kubenswrapper[4760]: I1227 05:50:14.374393 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjlcr" event={"ID":"d88183c3-8274-4fc3-85b8-5f1f4e15a77b","Type":"ContainerStarted","Data":"8171423e4969219ddbd5520acaa25b12a872773eb3ff91ded37c899525451dda"}
Dec 27 05:50:14 crc kubenswrapper[4760]: I1227 05:50:14.375317 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9hbs" event={"ID":"9c6fd0b7-8355-48f8-bc7b-80168772194d","Type":"ContainerStarted","Data":"e6a91befa96c6952ae8d4b14f5a33a15cac8701788ac3e3e81982f647d66cee4"}
Dec 27 05:50:16 crc kubenswrapper[4760]: I1227 05:50:16.390062 4760 generic.go:334] "Generic (PLEG): container finished" podID="9c6fd0b7-8355-48f8-bc7b-80168772194d" containerID="e3d7050c0951a36d463fbc825f417b1eda47aaf2a2068a086ddddf32e14c4ba1" exitCode=0
Dec 27 05:50:16 crc kubenswrapper[4760]: I1227 05:50:16.390502 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9hbs" event={"ID":"9c6fd0b7-8355-48f8-bc7b-80168772194d","Type":"ContainerDied","Data":"e3d7050c0951a36d463fbc825f417b1eda47aaf2a2068a086ddddf32e14c4ba1"}
Dec 27 05:50:16 crc kubenswrapper[4760]: I1227 05:50:16.393224 4760 generic.go:334] "Generic (PLEG): container finished" podID="c33de9e2-91e8-425a-8250-5301c5aef450" containerID="436b72167119d2f39e8b8fc4f662339deafbec7933ad22e502f08d36402a20be" exitCode=0
Dec 27 05:50:16 crc kubenswrapper[4760]: I1227 05:50:16.393324 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75pzc" event={"ID":"c33de9e2-91e8-425a-8250-5301c5aef450","Type":"ContainerDied","Data":"436b72167119d2f39e8b8fc4f662339deafbec7933ad22e502f08d36402a20be"}
Dec 27 05:50:16 crc kubenswrapper[4760]: I1227 05:50:16.395726 4760 generic.go:334] "Generic (PLEG): container finished" podID="8924ab71-d3e4-4709-a666-c70f96fe55a7" containerID="acea54c75500189df54c09992790315bacc86bc48bbdd1d4269a4a0d41857a6f" exitCode=0
Dec 27 05:50:16 crc kubenswrapper[4760]: I1227 05:50:16.395843 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5xqc" event={"ID":"8924ab71-d3e4-4709-a666-c70f96fe55a7","Type":"ContainerDied","Data":"acea54c75500189df54c09992790315bacc86bc48bbdd1d4269a4a0d41857a6f"}
Dec 27 05:50:16 crc kubenswrapper[4760]: I1227 05:50:16.397615 4760 generic.go:334] "Generic (PLEG): container finished" podID="d88183c3-8274-4fc3-85b8-5f1f4e15a77b" containerID="c42ee81510684501378f16244618a6d09858b67b4c62da198b049fe25d255c4e" exitCode=0
Dec 27 05:50:16 crc kubenswrapper[4760]: I1227 05:50:16.397712 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjlcr" event={"ID":"d88183c3-8274-4fc3-85b8-5f1f4e15a77b","Type":"ContainerDied","Data":"c42ee81510684501378f16244618a6d09858b67b4c62da198b049fe25d255c4e"}
Dec 27 05:50:16 crc kubenswrapper[4760]: I1227 05:50:16.400920 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zhrlh" event={"ID":"a8049c10-25cf-46ee-b24a-42b5e5af0d6a","Type":"ContainerStarted","Data":"67dab3c75241281a8b8adfc8ab9aab3e02faec3faa9247f54cce42f0afccdc3e"}
Dec 27 05:50:16 crc kubenswrapper[4760]: I1227 05:50:16.401595 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zhrlh"
Dec 27 05:50:16 crc kubenswrapper[4760]: I1227 05:50:16.407578 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zhrlh"
Dec 27 05:50:16 crc kubenswrapper[4760]: I1227 05:50:16.478015 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zhrlh" podStartSLOduration=8.477986514 podStartE2EDuration="8.477986514s" podCreationTimestamp="2025-12-27 05:50:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:50:16.468494915 +0000 UTC m=+339.228564230" watchObservedRunningTime="2025-12-27 05:50:16.477986514 +0000 UTC m=+339.238055849"
Dec 27 05:50:17 crc kubenswrapper[4760]: I1227 05:50:17.408150 4760 generic.go:334] "Generic (PLEG): container finished" podID="c33de9e2-91e8-425a-8250-5301c5aef450" containerID="b656206dd1641ba5e50b2b87af12e76ab39a37ede9375f9c8f2116d82d4f510b" exitCode=0
Dec 27 05:50:17 crc kubenswrapper[4760]: I1227 05:50:17.408265 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75pzc" event={"ID":"c33de9e2-91e8-425a-8250-5301c5aef450","Type":"ContainerDied","Data":"b656206dd1641ba5e50b2b87af12e76ab39a37ede9375f9c8f2116d82d4f510b"}
Dec 27 05:50:18 crc kubenswrapper[4760]: I1227 05:50:18.416833 4760 generic.go:334] "Generic (PLEG): container finished" podID="d88183c3-8274-4fc3-85b8-5f1f4e15a77b" containerID="6b5457ff0d3ee06cad60b58f58febf3bf4efdf84f6f5149ab7ce19466bbccbe7" exitCode=0
Dec 27 05:50:18 crc kubenswrapper[4760]: I1227 05:50:18.416892 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjlcr" event={"ID":"d88183c3-8274-4fc3-85b8-5f1f4e15a77b","Type":"ContainerDied","Data":"6b5457ff0d3ee06cad60b58f58febf3bf4efdf84f6f5149ab7ce19466bbccbe7"}
Dec 27 05:50:18 crc kubenswrapper[4760]: I1227 05:50:18.420523 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75pzc" event={"ID":"c33de9e2-91e8-425a-8250-5301c5aef450","Type":"ContainerStarted","Data":"815d09068cb366809a0c29adbe7948a44270cb11aa0feb2e267ddda6fe6f0cd7"}
Dec 27 05:50:18 crc kubenswrapper[4760]: I1227 05:50:18.425226 4760 generic.go:334] "Generic (PLEG): container finished" podID="9c6fd0b7-8355-48f8-bc7b-80168772194d" containerID="5472eac423e1bce4f3413a8434231f45aecf0cc0c93a02364c1ae2b73c0229eb" exitCode=0
Dec 27 05:50:18 crc kubenswrapper[4760]: I1227 05:50:18.425257 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9hbs" event={"ID":"9c6fd0b7-8355-48f8-bc7b-80168772194d","Type":"ContainerDied","Data":"5472eac423e1bce4f3413a8434231f45aecf0cc0c93a02364c1ae2b73c0229eb"}
Dec 27 05:50:18 crc kubenswrapper[4760]: I1227 05:50:18.427153 4760 generic.go:334] "Generic (PLEG): container finished" podID="8924ab71-d3e4-4709-a666-c70f96fe55a7" containerID="5ed806dafe46aa2172ccfd70869f1c5e034773082fb7fcabee157104651c44dc" exitCode=0
Dec 27 05:50:18 crc kubenswrapper[4760]: I1227 05:50:18.427234 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5xqc" event={"ID":"8924ab71-d3e4-4709-a666-c70f96fe55a7","Type":"ContainerDied","Data":"5ed806dafe46aa2172ccfd70869f1c5e034773082fb7fcabee157104651c44dc"}
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.029876 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" podUID="bf23480c-73e2-4c48-b39c-92ef17211274" containerName="registry" containerID="cri-o://6bce21252d21524a8d0cf18bb5333a63c11e1305deb2ad9cec3113cdc88c1269" gracePeriod=30
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.440515 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9hbs" event={"ID":"9c6fd0b7-8355-48f8-bc7b-80168772194d","Type":"ContainerStarted","Data":"a5767f16f9e226b77517465ee8f9a719f3d584441a641a8e03ae66f90932e5e3"}
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.446247 4760 generic.go:334] "Generic (PLEG): container finished" podID="bf23480c-73e2-4c48-b39c-92ef17211274" containerID="6bce21252d21524a8d0cf18bb5333a63c11e1305deb2ad9cec3113cdc88c1269" exitCode=0
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.446347 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" event={"ID":"bf23480c-73e2-4c48-b39c-92ef17211274","Type":"ContainerDied","Data":"6bce21252d21524a8d0cf18bb5333a63c11e1305deb2ad9cec3113cdc88c1269"}
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.463865 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k9hbs" podStartSLOduration=3.644981997 podStartE2EDuration="6.463839641s" podCreationTimestamp="2025-12-27 05:50:13 +0000 UTC" firstStartedPulling="2025-12-27 05:50:16.39227145 +0000 UTC m=+339.152340805" lastFinishedPulling="2025-12-27 05:50:19.211129134 +0000 UTC m=+341.971198449" observedRunningTime="2025-12-27 05:50:19.460560172 +0000 UTC m=+342.220629517" watchObservedRunningTime="2025-12-27 05:50:19.463839641 +0000 UTC m=+342.223908976"
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.464511 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-75pzc" podStartSLOduration=7.727545216 podStartE2EDuration="9.464499938s" podCreationTimestamp="2025-12-27 05:50:10 +0000 UTC" firstStartedPulling="2025-12-27 05:50:16.394862842 +0000 UTC m=+339.154932157" lastFinishedPulling="2025-12-27 05:50:18.131817564 +0000 UTC m=+340.891886879" observedRunningTime="2025-12-27 05:50:18.529611814 +0000 UTC m=+341.289681129" watchObservedRunningTime="2025-12-27 05:50:19.464499938 +0000 UTC m=+342.224569273"
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.492346 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s"
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.597319 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-registry-tls\") pod \"bf23480c-73e2-4c48-b39c-92ef17211274\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") "
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.597891 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-bound-sa-token\") pod \"bf23480c-73e2-4c48-b39c-92ef17211274\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") "
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.597935 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf23480c-73e2-4c48-b39c-92ef17211274-trusted-ca\") pod \"bf23480c-73e2-4c48-b39c-92ef17211274\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") "
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.598069 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bf23480c-73e2-4c48-b39c-92ef17211274-ca-trust-extracted\") pod \"bf23480c-73e2-4c48-b39c-92ef17211274\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") "
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.598189 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bf23480c-73e2-4c48-b39c-92ef17211274-registry-certificates\") pod \"bf23480c-73e2-4c48-b39c-92ef17211274\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") "
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.599064 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"bf23480c-73e2-4c48-b39c-92ef17211274\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") "
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.599255 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bf23480c-73e2-4c48-b39c-92ef17211274-installation-pull-secrets\") pod \"bf23480c-73e2-4c48-b39c-92ef17211274\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") "
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.599431 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64tmq\" (UniqueName: \"kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-kube-api-access-64tmq\") pod \"bf23480c-73e2-4c48-b39c-92ef17211274\" (UID: \"bf23480c-73e2-4c48-b39c-92ef17211274\") "
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.599602 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf23480c-73e2-4c48-b39c-92ef17211274-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "bf23480c-73e2-4c48-b39c-92ef17211274" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.600335 4760 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bf23480c-73e2-4c48-b39c-92ef17211274-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.601269 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf23480c-73e2-4c48-b39c-92ef17211274-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf23480c-73e2-4c48-b39c-92ef17211274" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.607548 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf23480c-73e2-4c48-b39c-92ef17211274" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.607976 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "bf23480c-73e2-4c48-b39c-92ef17211274" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.608214 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-kube-api-access-64tmq" (OuterVolumeSpecName: "kube-api-access-64tmq") pod "bf23480c-73e2-4c48-b39c-92ef17211274" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274"). InnerVolumeSpecName "kube-api-access-64tmq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.612395 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf23480c-73e2-4c48-b39c-92ef17211274-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "bf23480c-73e2-4c48-b39c-92ef17211274" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.619594 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "bf23480c-73e2-4c48-b39c-92ef17211274" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.626909 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf23480c-73e2-4c48-b39c-92ef17211274-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "bf23480c-73e2-4c48-b39c-92ef17211274" (UID: "bf23480c-73e2-4c48-b39c-92ef17211274"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.701562 4760 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bf23480c-73e2-4c48-b39c-92ef17211274-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.701875 4760 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bf23480c-73e2-4c48-b39c-92ef17211274-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.701962 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64tmq\" (UniqueName: \"kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-kube-api-access-64tmq\") on node \"crc\" DevicePath \"\""
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.702024 4760 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.702078 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf23480c-73e2-4c48-b39c-92ef17211274-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 27 05:50:19 crc kubenswrapper[4760]: I1227 05:50:19.702162 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf23480c-73e2-4c48-b39c-92ef17211274-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 27 05:50:20 crc kubenswrapper[4760]: I1227 05:50:20.468357 4760 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" Dec 27 05:50:20 crc kubenswrapper[4760]: I1227 05:50:20.468561 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dzs8s" event={"ID":"bf23480c-73e2-4c48-b39c-92ef17211274","Type":"ContainerDied","Data":"821737ab9bf9f94da76425dba1bf706fcbe21f5048d02a595d7083a300486442"} Dec 27 05:50:20 crc kubenswrapper[4760]: I1227 05:50:20.469515 4760 scope.go:117] "RemoveContainer" containerID="6bce21252d21524a8d0cf18bb5333a63c11e1305deb2ad9cec3113cdc88c1269" Dec 27 05:50:20 crc kubenswrapper[4760]: I1227 05:50:20.473634 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5xqc" event={"ID":"8924ab71-d3e4-4709-a666-c70f96fe55a7","Type":"ContainerStarted","Data":"641a6d82fc870eb8a43ce5d7f0f7ecf4611d3340f1377c78ac3c5464064129ab"} Dec 27 05:50:20 crc kubenswrapper[4760]: I1227 05:50:20.482837 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjlcr" event={"ID":"d88183c3-8274-4fc3-85b8-5f1f4e15a77b","Type":"ContainerStarted","Data":"4732e962a6c9ea98e2cd1480a7c6ae2c776c620700221dd31d4a07b36f0ab8c9"} Dec 27 05:50:20 crc kubenswrapper[4760]: I1227 05:50:20.501016 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f5xqc" podStartSLOduration=7.331344704 podStartE2EDuration="10.500995121s" podCreationTimestamp="2025-12-27 05:50:10 +0000 UTC" firstStartedPulling="2025-12-27 05:50:16.399374401 +0000 UTC m=+339.159443716" lastFinishedPulling="2025-12-27 05:50:19.569024818 +0000 UTC m=+342.329094133" observedRunningTime="2025-12-27 05:50:20.492437754 +0000 UTC m=+343.252507079" watchObservedRunningTime="2025-12-27 05:50:20.500995121 +0000 UTC m=+343.261064436" Dec 27 05:50:20 crc kubenswrapper[4760]: I1227 05:50:20.517172 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sjlcr" podStartSLOduration=5.526253264 podStartE2EDuration="8.517153892s" podCreationTimestamp="2025-12-27 05:50:12 +0000 UTC" firstStartedPulling="2025-12-27 05:50:16.400069019 +0000 UTC m=+339.160138334" lastFinishedPulling="2025-12-27 05:50:19.390969647 +0000 UTC m=+342.151038962" observedRunningTime="2025-12-27 05:50:20.515180804 +0000 UTC m=+343.275250119" watchObservedRunningTime="2025-12-27 05:50:20.517153892 +0000 UTC m=+343.277223217" Dec 27 05:50:20 crc kubenswrapper[4760]: I1227 05:50:20.526989 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzs8s"] Dec 27 05:50:20 crc kubenswrapper[4760]: I1227 05:50:20.532751 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzs8s"] Dec 27 05:50:20 crc kubenswrapper[4760]: I1227 05:50:20.897254 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-75pzc" Dec 27 05:50:20 crc kubenswrapper[4760]: I1227 05:50:20.897597 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-75pzc" Dec 27 05:50:20 crc kubenswrapper[4760]: I1227 05:50:20.947837 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-75pzc" Dec 27 05:50:21 crc kubenswrapper[4760]: I1227 05:50:21.190828 4760 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-f5xqc" Dec 27 05:50:21 crc kubenswrapper[4760]: I1227 05:50:21.191000 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f5xqc" Dec 27 05:50:21 crc kubenswrapper[4760]: I1227 05:50:21.510477 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf23480c-73e2-4c48-b39c-92ef17211274" path="/var/lib/kubelet/pods/bf23480c-73e2-4c48-b39c-92ef17211274/volumes" Dec 27 05:50:22 crc kubenswrapper[4760]: I1227 05:50:22.228305 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-f5xqc" podUID="8924ab71-d3e4-4709-a666-c70f96fe55a7" containerName="registry-server" probeResult="failure" output=< Dec 27 05:50:22 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Dec 27 05:50:22 crc kubenswrapper[4760]: > Dec 27 05:50:23 crc kubenswrapper[4760]: I1227 05:50:23.326329 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sjlcr" Dec 27 05:50:23 crc kubenswrapper[4760]: I1227 05:50:23.326617 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sjlcr" Dec 27 05:50:23 crc kubenswrapper[4760]: I1227 05:50:23.369633 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sjlcr" Dec 27 05:50:23 crc kubenswrapper[4760]: I1227 05:50:23.513449 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k9hbs" Dec 27 05:50:23 crc kubenswrapper[4760]: I1227 05:50:23.513828 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k9hbs" Dec 27 05:50:24 crc kubenswrapper[4760]: I1227 05:50:24.570121 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k9hbs" podUID="9c6fd0b7-8355-48f8-bc7b-80168772194d" containerName="registry-server" probeResult="failure" output=< Dec 27 05:50:24 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Dec 27 05:50:24 crc kubenswrapper[4760]: > Dec 27 05:50:30 crc kubenswrapper[4760]: I1227 05:50:30.948596 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-75pzc" Dec 27 05:50:31 crc kubenswrapper[4760]: I1227 05:50:31.237414 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f5xqc" Dec 27 05:50:31 crc kubenswrapper[4760]: I1227 05:50:31.291745 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f5xqc" Dec 27 05:50:33 crc kubenswrapper[4760]: I1227 05:50:33.388920 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sjlcr" Dec 27 05:50:33 crc kubenswrapper[4760]: I1227 05:50:33.550636 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k9hbs" Dec 27 05:50:33 crc kubenswrapper[4760]: I1227 05:50:33.606622 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k9hbs" Dec 27 05:50:35 crc kubenswrapper[4760]: I1227 05:50:35.287773 4760 patch_prober.go:28] interesting 
pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 05:50:35 crc kubenswrapper[4760]: I1227 05:50:35.287864 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 05:50:43 crc kubenswrapper[4760]: I1227 05:50:43.444460 4760 scope.go:117] "RemoveContainer" containerID="7f31b4c769e22ccaba5cef0b175ea339810575e267bee3e06951201a91518f42" Dec 27 05:50:43 crc kubenswrapper[4760]: I1227 05:50:43.466272 4760 scope.go:117] "RemoveContainer" containerID="79463608c1d23c9c8283ed1f1035e7034ff016e8b4dfbdfac86a796826382a1f" Dec 27 05:50:43 crc kubenswrapper[4760]: I1227 05:50:43.490666 4760 scope.go:117] "RemoveContainer" containerID="6830a51cded083da67ed8699a7efe61f47829115d1a196ca128c3dfdfe3b5711" Dec 27 05:50:43 crc kubenswrapper[4760]: I1227 05:50:43.515230 4760 scope.go:117] "RemoveContainer" containerID="aca52de2a43a5cbb00c96b7515199930bd22fbf6f173bfcdc4524bb7db15a7d3" Dec 27 05:51:05 crc kubenswrapper[4760]: I1227 05:51:05.287900 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 05:51:05 crc kubenswrapper[4760]: I1227 05:51:05.288762 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 05:51:35 crc kubenswrapper[4760]: I1227 05:51:35.288119 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 05:51:35 crc kubenswrapper[4760]: I1227 05:51:35.288781 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 05:51:35 crc kubenswrapper[4760]: I1227 05:51:35.288839 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:51:35 crc kubenswrapper[4760]: I1227 05:51:35.289558 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c28937ac367ba8ff932ff70ef08e94c615fc93fe52c601f6ac3e8ae45cf34b0d"} pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 27 05:51:35 crc kubenswrapper[4760]: I1227 
05:51:35.289624 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" containerID="cri-o://c28937ac367ba8ff932ff70ef08e94c615fc93fe52c601f6ac3e8ae45cf34b0d" gracePeriod=600 Dec 27 05:51:35 crc kubenswrapper[4760]: I1227 05:51:35.961736 4760 generic.go:334] "Generic (PLEG): container finished" podID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerID="c28937ac367ba8ff932ff70ef08e94c615fc93fe52c601f6ac3e8ae45cf34b0d" exitCode=0 Dec 27 05:51:35 crc kubenswrapper[4760]: I1227 05:51:35.961792 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerDied","Data":"c28937ac367ba8ff932ff70ef08e94c615fc93fe52c601f6ac3e8ae45cf34b0d"} Dec 27 05:51:35 crc kubenswrapper[4760]: I1227 05:51:35.961840 4760 scope.go:117] "RemoveContainer" containerID="16af3f876a882d78e7581b4301324630eb28d1b1a7c207dc8bc8b06c62da9d79" Dec 27 05:51:36 crc kubenswrapper[4760]: I1227 05:51:36.969049 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerStarted","Data":"6c09ce728c7de3887001e96cbc48ccd41a2a698cb90d9eedf6f87ae94181d9e6"} Dec 27 05:51:43 crc kubenswrapper[4760]: I1227 05:51:43.640435 4760 scope.go:117] "RemoveContainer" containerID="de5c0775238cfb17641ac375a8fb90a4d2e8fccc6affb04932b3d1ef9758d316" Dec 27 05:51:43 crc kubenswrapper[4760]: I1227 05:51:43.658230 4760 scope.go:117] "RemoveContainer" containerID="fb2b5dd4095f0ac0f6130812a7418b08cca483761b802689df42548f7b3a9928" Dec 27 05:52:12 crc kubenswrapper[4760]: E1227 05:52:12.630905 4760 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.129s" Dec 27 05:54:05 crc kubenswrapper[4760]: I1227 05:54:05.287854 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 05:54:05 crc kubenswrapper[4760]: I1227 05:54:05.288824 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 05:54:35 crc kubenswrapper[4760]: I1227 05:54:35.288734 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 05:54:35 crc kubenswrapper[4760]: I1227 05:54:35.291388 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 05:55:05 crc kubenswrapper[4760]: I1227 
05:55:05.287721 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 05:55:05 crc kubenswrapper[4760]: I1227 05:55:05.288533 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 05:55:05 crc kubenswrapper[4760]: I1227 05:55:05.288620 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:55:05 crc kubenswrapper[4760]: I1227 05:55:05.289628 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c09ce728c7de3887001e96cbc48ccd41a2a698cb90d9eedf6f87ae94181d9e6"} pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 27 05:55:05 crc kubenswrapper[4760]: I1227 05:55:05.289743 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" containerID="cri-o://6c09ce728c7de3887001e96cbc48ccd41a2a698cb90d9eedf6f87ae94181d9e6" gracePeriod=600 Dec 27 05:55:06 crc kubenswrapper[4760]: I1227 05:55:06.418726 4760 generic.go:334] "Generic (PLEG): container finished" podID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerID="6c09ce728c7de3887001e96cbc48ccd41a2a698cb90d9eedf6f87ae94181d9e6" exitCode=0 Dec 27 05:55:06 crc kubenswrapper[4760]: I1227 05:55:06.419466 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerDied","Data":"6c09ce728c7de3887001e96cbc48ccd41a2a698cb90d9eedf6f87ae94181d9e6"} Dec 27 05:55:06 crc kubenswrapper[4760]: I1227 05:55:06.419517 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerStarted","Data":"8bdfa6927c42e93e91cee65f61297025ba0cd7530f28a85f4fd103174d5484b4"} Dec 27 05:55:06 crc kubenswrapper[4760]: I1227 05:55:06.419557 4760 scope.go:117] "RemoveContainer" containerID="c28937ac367ba8ff932ff70ef08e94c615fc93fe52c601f6ac3e8ae45cf34b0d" Dec 27 05:55:58 crc kubenswrapper[4760]: I1227 05:55:58.550589 4760 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 27 05:57:20 crc kubenswrapper[4760]: I1227 05:57:20.778629 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m"] Dec 27 05:57:20 crc kubenswrapper[4760]: E1227 05:57:20.779349 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf23480c-73e2-4c48-b39c-92ef17211274" containerName="registry" Dec 27 05:57:20 crc kubenswrapper[4760]: I1227 05:57:20.779363 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bf23480c-73e2-4c48-b39c-92ef17211274" containerName="registry" Dec 27 05:57:20 crc kubenswrapper[4760]: I1227 05:57:20.779476 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf23480c-73e2-4c48-b39c-92ef17211274" containerName="registry" Dec 27 05:57:20 crc kubenswrapper[4760]: I1227 05:57:20.780211 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" Dec 27 05:57:20 crc kubenswrapper[4760]: I1227 05:57:20.782069 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 27 05:57:20 crc kubenswrapper[4760]: I1227 05:57:20.787383 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m"] Dec 27 05:57:20 crc kubenswrapper[4760]: I1227 05:57:20.941138 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/030247e7-bd9a-4c58-8c3c-aad08db6895d-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m\" (UID: \"030247e7-bd9a-4c58-8c3c-aad08db6895d\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" Dec 27 05:57:20 crc kubenswrapper[4760]: I1227 05:57:20.941206 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5cvm\" (UniqueName: \"kubernetes.io/projected/030247e7-bd9a-4c58-8c3c-aad08db6895d-kube-api-access-m5cvm\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m\" (UID: \"030247e7-bd9a-4c58-8c3c-aad08db6895d\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" Dec 27 05:57:20 crc kubenswrapper[4760]: I1227 05:57:20.941242 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/030247e7-bd9a-4c58-8c3c-aad08db6895d-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m\" (UID: \"030247e7-bd9a-4c58-8c3c-aad08db6895d\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" Dec 27 05:57:21 crc kubenswrapper[4760]: I1227 05:57:21.042032 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/030247e7-bd9a-4c58-8c3c-aad08db6895d-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m\" (UID: \"030247e7-bd9a-4c58-8c3c-aad08db6895d\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" Dec 27 05:57:21 crc kubenswrapper[4760]: I1227 05:57:21.042141 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/030247e7-bd9a-4c58-8c3c-aad08db6895d-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m\" (UID: \"030247e7-bd9a-4c58-8c3c-aad08db6895d\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" Dec 27 05:57:21 crc kubenswrapper[4760]: I1227 05:57:21.042176 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5cvm\" (UniqueName: \"kubernetes.io/projected/030247e7-bd9a-4c58-8c3c-aad08db6895d-kube-api-access-m5cvm\") pod 
\"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m\" (UID: \"030247e7-bd9a-4c58-8c3c-aad08db6895d\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" Dec 27 05:57:21 crc kubenswrapper[4760]: I1227 05:57:21.042627 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/030247e7-bd9a-4c58-8c3c-aad08db6895d-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m\" (UID: \"030247e7-bd9a-4c58-8c3c-aad08db6895d\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" Dec 27 05:57:21 crc kubenswrapper[4760]: I1227 05:57:21.042931 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/030247e7-bd9a-4c58-8c3c-aad08db6895d-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m\" (UID: \"030247e7-bd9a-4c58-8c3c-aad08db6895d\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" Dec 27 05:57:21 crc kubenswrapper[4760]: I1227 05:57:21.062033 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5cvm\" (UniqueName: \"kubernetes.io/projected/030247e7-bd9a-4c58-8c3c-aad08db6895d-kube-api-access-m5cvm\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m\" (UID: \"030247e7-bd9a-4c58-8c3c-aad08db6895d\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" Dec 27 05:57:21 crc kubenswrapper[4760]: I1227 05:57:21.128397 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" Dec 27 05:57:21 crc kubenswrapper[4760]: I1227 05:57:21.548849 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m"] Dec 27 05:57:22 crc kubenswrapper[4760]: I1227 05:57:22.318588 4760 generic.go:334] "Generic (PLEG): container finished" podID="030247e7-bd9a-4c58-8c3c-aad08db6895d" containerID="c8dbf30fcbea65feabcbabe6f9b3a1f7a9b942c10a89f39d69cf0288efcf92fb" exitCode=0 Dec 27 05:57:22 crc kubenswrapper[4760]: I1227 05:57:22.318660 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" event={"ID":"030247e7-bd9a-4c58-8c3c-aad08db6895d","Type":"ContainerDied","Data":"c8dbf30fcbea65feabcbabe6f9b3a1f7a9b942c10a89f39d69cf0288efcf92fb"} Dec 27 05:57:22 crc kubenswrapper[4760]: I1227 05:57:22.318901 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" event={"ID":"030247e7-bd9a-4c58-8c3c-aad08db6895d","Type":"ContainerStarted","Data":"c708b1f171139a97e408f035af1f6e545807448f6818cd292fe42d406e3d5ad8"} Dec 27 05:57:22 crc kubenswrapper[4760]: I1227 05:57:22.320626 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 27 05:57:23 crc kubenswrapper[4760]: I1227 05:57:23.105954 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nbhlj"] Dec 27 05:57:23 crc kubenswrapper[4760]: I1227 05:57:23.107277 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:23 crc kubenswrapper[4760]: I1227 05:57:23.117164 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nbhlj"] Dec 27 05:57:23 crc kubenswrapper[4760]: I1227 05:57:23.171969 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-utilities\") pod \"redhat-operators-nbhlj\" (UID: \"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51\") " pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:23 crc kubenswrapper[4760]: I1227 05:57:23.172047 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkpwv\" (UniqueName: \"kubernetes.io/projected/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-kube-api-access-pkpwv\") pod \"redhat-operators-nbhlj\" (UID: \"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51\") " pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:23 crc kubenswrapper[4760]: I1227 05:57:23.172175 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-catalog-content\") pod \"redhat-operators-nbhlj\" (UID: \"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51\") " pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:23 crc kubenswrapper[4760]: I1227 05:57:23.273274 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkpwv\" (UniqueName: \"kubernetes.io/projected/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-kube-api-access-pkpwv\") pod \"redhat-operators-nbhlj\" (UID: \"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51\") " pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:23 crc kubenswrapper[4760]: I1227 05:57:23.273357 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-catalog-content\") pod \"redhat-operators-nbhlj\" (UID: \"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51\") " pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:23 crc kubenswrapper[4760]: I1227 05:57:23.273399 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-utilities\") pod \"redhat-operators-nbhlj\" (UID: \"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51\") " pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:23 crc kubenswrapper[4760]: I1227 05:57:23.273815 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-utilities\") pod \"redhat-operators-nbhlj\" (UID: \"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51\") " pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:23 crc kubenswrapper[4760]: I1227 05:57:23.273928 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-catalog-content\") pod \"redhat-operators-nbhlj\" (UID: \"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51\") " pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:23 crc kubenswrapper[4760]: I1227 05:57:23.291803 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pkpwv\" (UniqueName: \"kubernetes.io/projected/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-kube-api-access-pkpwv\") pod \"redhat-operators-nbhlj\" (UID: \"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51\") " pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:23 crc kubenswrapper[4760]: I1227 05:57:23.474968 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:24 crc kubenswrapper[4760]: I1227 05:57:24.038599 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nbhlj"] Dec 27 05:57:24 crc kubenswrapper[4760]: I1227 05:57:24.338577 4760 generic.go:334] "Generic (PLEG): container finished" podID="a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51" containerID="7708b2234bb72afb71493364d39836c05bb2fed6fbb5d43d718f5528336a0f7e" exitCode=0 Dec 27 05:57:24 crc kubenswrapper[4760]: I1227 05:57:24.338653 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbhlj" event={"ID":"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51","Type":"ContainerDied","Data":"7708b2234bb72afb71493364d39836c05bb2fed6fbb5d43d718f5528336a0f7e"} Dec 27 05:57:24 crc kubenswrapper[4760]: I1227 05:57:24.338683 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbhlj" event={"ID":"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51","Type":"ContainerStarted","Data":"5528e5d5884cbe6d499a055c0acc491b7ab816235a397bd7ffc48bcb33fc3f9c"} Dec 27 05:57:24 crc kubenswrapper[4760]: I1227 05:57:24.340104 4760 generic.go:334] "Generic (PLEG): container finished" podID="030247e7-bd9a-4c58-8c3c-aad08db6895d" containerID="634bfa328a09f4a476525b663c014d1f047078e48afaa873edc0598e40848298" exitCode=0 Dec 27 05:57:24 crc kubenswrapper[4760]: I1227 05:57:24.340134 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" event={"ID":"030247e7-bd9a-4c58-8c3c-aad08db6895d","Type":"ContainerDied","Data":"634bfa328a09f4a476525b663c014d1f047078e48afaa873edc0598e40848298"} Dec 27 05:57:25 crc kubenswrapper[4760]: I1227 05:57:25.360934 4760 generic.go:334] "Generic (PLEG): container finished" podID="030247e7-bd9a-4c58-8c3c-aad08db6895d" containerID="bd21b4a09e79487866d93f51a3ab21f71a9d37cd6acf98fb2663f57fcb91967a" exitCode=0 Dec 27 05:57:25 crc kubenswrapper[4760]: I1227 05:57:25.361061 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" event={"ID":"030247e7-bd9a-4c58-8c3c-aad08db6895d","Type":"ContainerDied","Data":"bd21b4a09e79487866d93f51a3ab21f71a9d37cd6acf98fb2663f57fcb91967a"} Dec 27 05:57:26 crc kubenswrapper[4760]: I1227 05:57:26.370030 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbhlj" event={"ID":"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51","Type":"ContainerStarted","Data":"ec14fe761e319f208650a2c21478593489a67a10f54fa0ba76a27e9b1d6bbe6f"} Dec 27 05:57:26 crc kubenswrapper[4760]: I1227 05:57:26.624472 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" Dec 27 05:57:26 crc kubenswrapper[4760]: I1227 05:57:26.725467 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/030247e7-bd9a-4c58-8c3c-aad08db6895d-bundle\") pod \"030247e7-bd9a-4c58-8c3c-aad08db6895d\" (UID: \"030247e7-bd9a-4c58-8c3c-aad08db6895d\") " Dec 27 05:57:26 crc kubenswrapper[4760]: I1227 05:57:26.726259 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/030247e7-bd9a-4c58-8c3c-aad08db6895d-bundle" (OuterVolumeSpecName: "bundle") pod "030247e7-bd9a-4c58-8c3c-aad08db6895d" (UID: "030247e7-bd9a-4c58-8c3c-aad08db6895d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:57:26 crc kubenswrapper[4760]: I1227 05:57:26.726305 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5cvm\" (UniqueName: \"kubernetes.io/projected/030247e7-bd9a-4c58-8c3c-aad08db6895d-kube-api-access-m5cvm\") pod \"030247e7-bd9a-4c58-8c3c-aad08db6895d\" (UID: \"030247e7-bd9a-4c58-8c3c-aad08db6895d\") " Dec 27 05:57:26 crc kubenswrapper[4760]: I1227 05:57:26.726401 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/030247e7-bd9a-4c58-8c3c-aad08db6895d-util\") pod \"030247e7-bd9a-4c58-8c3c-aad08db6895d\" (UID: \"030247e7-bd9a-4c58-8c3c-aad08db6895d\") " Dec 27 05:57:26 crc kubenswrapper[4760]: I1227 05:57:26.726938 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/030247e7-bd9a-4c58-8c3c-aad08db6895d-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:26 crc kubenswrapper[4760]: I1227 05:57:26.733486 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/030247e7-bd9a-4c58-8c3c-aad08db6895d-kube-api-access-m5cvm" (OuterVolumeSpecName: "kube-api-access-m5cvm") pod "030247e7-bd9a-4c58-8c3c-aad08db6895d" (UID: "030247e7-bd9a-4c58-8c3c-aad08db6895d"). InnerVolumeSpecName "kube-api-access-m5cvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:57:26 crc kubenswrapper[4760]: I1227 05:57:26.744953 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/030247e7-bd9a-4c58-8c3c-aad08db6895d-util" (OuterVolumeSpecName: "util") pod "030247e7-bd9a-4c58-8c3c-aad08db6895d" (UID: "030247e7-bd9a-4c58-8c3c-aad08db6895d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:57:26 crc kubenswrapper[4760]: I1227 05:57:26.827907 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5cvm\" (UniqueName: \"kubernetes.io/projected/030247e7-bd9a-4c58-8c3c-aad08db6895d-kube-api-access-m5cvm\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:26 crc kubenswrapper[4760]: I1227 05:57:26.828194 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/030247e7-bd9a-4c58-8c3c-aad08db6895d-util\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:27 crc kubenswrapper[4760]: I1227 05:57:27.378222 4760 generic.go:334] "Generic (PLEG): container finished" podID="a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51" containerID="ec14fe761e319f208650a2c21478593489a67a10f54fa0ba76a27e9b1d6bbe6f" exitCode=0 Dec 27 05:57:27 crc kubenswrapper[4760]: I1227 05:57:27.378350 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbhlj" event={"ID":"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51","Type":"ContainerDied","Data":"ec14fe761e319f208650a2c21478593489a67a10f54fa0ba76a27e9b1d6bbe6f"} Dec 27 05:57:27 crc kubenswrapper[4760]: I1227 05:57:27.381039 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" event={"ID":"030247e7-bd9a-4c58-8c3c-aad08db6895d","Type":"ContainerDied","Data":"c708b1f171139a97e408f035af1f6e545807448f6818cd292fe42d406e3d5ad8"} Dec 27 05:57:27 crc kubenswrapper[4760]: I1227 05:57:27.381071 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c708b1f171139a97e408f035af1f6e545807448f6818cd292fe42d406e3d5ad8" Dec 27 05:57:27 crc kubenswrapper[4760]: I1227 05:57:27.381146 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m" Dec 27 05:57:28 crc kubenswrapper[4760]: I1227 05:57:28.393557 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbhlj" event={"ID":"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51","Type":"ContainerStarted","Data":"64fe480911dcb1d63230fc25c285a2091c733b74f960e27749483ba49f5f3f71"} Dec 27 05:57:28 crc kubenswrapper[4760]: I1227 05:57:28.413717 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nbhlj" podStartSLOduration=2.811352033 podStartE2EDuration="5.413703613s" podCreationTimestamp="2025-12-27 05:57:23 +0000 UTC" firstStartedPulling="2025-12-27 05:57:25.363461481 +0000 UTC m=+768.123530826" lastFinishedPulling="2025-12-27 05:57:27.965813101 +0000 UTC m=+770.725882406" observedRunningTime="2025-12-27 05:57:28.409346917 +0000 UTC m=+771.169416232" watchObservedRunningTime="2025-12-27 05:57:28.413703613 +0000 UTC m=+771.173772928" Dec 27 05:57:30 crc kubenswrapper[4760]: I1227 05:57:30.321347 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rmm9v"] Dec 27 05:57:30 crc kubenswrapper[4760]: I1227 05:57:30.322152 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovn-controller" containerID="cri-o://a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e" gracePeriod=30 Dec 27 05:57:30 crc kubenswrapper[4760]: I1227 05:57:30.322178 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="nbdb" containerID="cri-o://53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826" gracePeriod=30 Dec 27 05:57:30 crc kubenswrapper[4760]: I1227 05:57:30.322226 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a" gracePeriod=30 Dec 27 05:57:30 crc kubenswrapper[4760]: I1227 05:57:30.322308 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="kube-rbac-proxy-node" containerID="cri-o://105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d" gracePeriod=30 Dec 27 05:57:30 crc kubenswrapper[4760]: I1227 05:57:30.322361 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="sbdb" containerID="cri-o://c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396" gracePeriod=30 Dec 27 05:57:30 crc kubenswrapper[4760]: I1227 05:57:30.322445 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="northd" containerID="cri-o://b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba" gracePeriod=30 Dec 27 05:57:30 crc kubenswrapper[4760]: I1227 05:57:30.322591 4760 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovn-acl-logging" containerID="cri-o://5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156" gracePeriod=30 Dec 27 05:57:30 crc kubenswrapper[4760]: I1227 05:57:30.368643 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovnkube-controller" containerID="cri-o://ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f" gracePeriod=30 Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.252746 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-qcdnn"] Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.252958 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030247e7-bd9a-4c58-8c3c-aad08db6895d" containerName="util" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.252973 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="030247e7-bd9a-4c58-8c3c-aad08db6895d" containerName="util" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.252992 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030247e7-bd9a-4c58-8c3c-aad08db6895d" containerName="extract" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.253000 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="030247e7-bd9a-4c58-8c3c-aad08db6895d" containerName="extract" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.253012 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030247e7-bd9a-4c58-8c3c-aad08db6895d" containerName="pull" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.253021 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="030247e7-bd9a-4c58-8c3c-aad08db6895d" containerName="pull" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.253226 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="030247e7-bd9a-4c58-8c3c-aad08db6895d" containerName="extract" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.253661 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.256427 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.256487 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.256878 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-rf885" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.283109 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zc5k\" (UniqueName: \"kubernetes.io/projected/cb7c9b2c-f35f-42bf-8419-5c0615323e3a-kube-api-access-8zc5k\") pod \"nmstate-operator-6769fb99d-qcdnn\" (UID: \"cb7c9b2c-f35f-42bf-8419-5c0615323e3a\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.374919 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmm9v_7c8aa557-e11a-4c40-9179-22811f44ff18/ovnkube-controller/0.log" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.377320 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmm9v_7c8aa557-e11a-4c40-9179-22811f44ff18/ovn-acl-logging/0.log" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.377920 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmm9v_7c8aa557-e11a-4c40-9179-22811f44ff18/ovn-controller/0.log" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.378561 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383581 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-ovn\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383617 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-var-lib-openvswitch\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383644 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-ovnkube-script-lib\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383673 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-run-ovn-kubernetes\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383691 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-cni-bin\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383707 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-systemd\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383708 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383728 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-slash\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383753 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g5xx\" (UniqueName: \"kubernetes.io/projected/7c8aa557-e11a-4c40-9179-22811f44ff18-kube-api-access-8g5xx\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383761 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383750 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383772 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-var-lib-cni-networks-ovn-kubernetes\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383808 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383842 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383855 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-openvswitch\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383885 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-systemd-units\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383884 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-slash" (OuterVolumeSpecName: "host-slash") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383911 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c8aa557-e11a-4c40-9179-22811f44ff18-ovn-node-metrics-cert\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383931 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-log-socket\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383952 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-etc-openvswitch\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.383975 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-kubelet\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384003 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-cni-netd\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384028 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-node-log\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384032 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-systemd-units" (OuterVolumeSpecName: 
"systemd-units") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384074 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-ovnkube-config\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384115 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384140 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-run-netns\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384148 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384189 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-env-overrides\") pod \"7c8aa557-e11a-4c40-9179-22811f44ff18\" (UID: \"7c8aa557-e11a-4c40-9179-22811f44ff18\") " Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384409 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384583 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zc5k\" (UniqueName: \"kubernetes.io/projected/cb7c9b2c-f35f-42bf-8419-5c0615323e3a-kube-api-access-8zc5k\") pod \"nmstate-operator-6769fb99d-qcdnn\" (UID: \"cb7c9b2c-f35f-42bf-8419-5c0615323e3a\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384685 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-log-socket" (OuterVolumeSpecName: "log-socket") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384716 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384720 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384751 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384772 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384831 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-node-log" (OuterVolumeSpecName: "node-log") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384968 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.384997 4760 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.385019 4760 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.385035 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.385049 4760 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.385059 4760 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.385071 4760 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-slash\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.385081 4760 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.385143 4760 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.385154 4760 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.385169 4760 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.396487 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8aa557-e11a-4c40-9179-22811f44ff18-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.397534 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c8aa557-e11a-4c40-9179-22811f44ff18-kube-api-access-8g5xx" (OuterVolumeSpecName: "kube-api-access-8g5xx") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "kube-api-access-8g5xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.405704 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "7c8aa557-e11a-4c40-9179-22811f44ff18" (UID: "7c8aa557-e11a-4c40-9179-22811f44ff18"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.409293 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zc5k\" (UniqueName: \"kubernetes.io/projected/cb7c9b2c-f35f-42bf-8419-5c0615323e3a-kube-api-access-8zc5k\") pod \"nmstate-operator-6769fb99d-qcdnn\" (UID: \"cb7c9b2c-f35f-42bf-8419-5c0615323e3a\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.412128 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmm9v_7c8aa557-e11a-4c40-9179-22811f44ff18/ovnkube-controller/0.log" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.414987 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmm9v_7c8aa557-e11a-4c40-9179-22811f44ff18/ovn-acl-logging/0.log" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.416486 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rmm9v_7c8aa557-e11a-4c40-9179-22811f44ff18/ovn-controller/0.log" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.416915 4760 generic.go:334] "Generic (PLEG): container finished" podID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerID="ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f" exitCode=0 Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.416962 4760 generic.go:334] "Generic (PLEG): container finished" podID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerID="c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396" exitCode=0 Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.416976 4760 generic.go:334] "Generic (PLEG): container finished" podID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerID="53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826" exitCode=0 Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.416990 4760 generic.go:334] "Generic (PLEG): container finished" podID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerID="b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba" exitCode=0 Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417003 4760 generic.go:334] "Generic (PLEG): container finished" podID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerID="498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a" exitCode=0 Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417014 4760 generic.go:334] "Generic (PLEG): container finished" podID="7c8aa557-e11a-4c40-9179-22811f44ff18" 
containerID="105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d" exitCode=0 Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417026 4760 generic.go:334] "Generic (PLEG): container finished" podID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerID="5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156" exitCode=143 Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417041 4760 generic.go:334] "Generic (PLEG): container finished" podID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerID="a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e" exitCode=143 Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417181 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerDied","Data":"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417233 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerDied","Data":"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417271 4760 scope.go:117] "RemoveContainer" containerID="ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417342 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417280 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerDied","Data":"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417457 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerDied","Data":"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417479 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerDied","Data":"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417499 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerDied","Data":"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417520 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417540 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417550 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417560 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417569 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417579 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417588 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417600 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417611 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417624 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerDied","Data":"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417637 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417650 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417660 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417669 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417678 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417688 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417697 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417707 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417716 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417725 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417736 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerDied","Data":"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417750 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417761 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417770 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417781 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417792 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417801 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417810 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417819 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417828 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417837 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417853 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rmm9v" event={"ID":"7c8aa557-e11a-4c40-9179-22811f44ff18","Type":"ContainerDied","Data":"b9c9b4e1450c371e7cdf9537f7c5669fd91a557d37989de2aa9b4fadec16c44e"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417870 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417881 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417891 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417901 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417910 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417919 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417929 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417939 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417949 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.417959 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.425416 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fmk6w_b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d/kube-multus/0.log" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.425493 4760 generic.go:334] "Generic (PLEG): container finished" podID="b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d" containerID="2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162" exitCode=2 Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.425538 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fmk6w" 
event={"ID":"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d","Type":"ContainerDied","Data":"2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162"} Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.426321 4760 scope.go:117] "RemoveContainer" containerID="2f484c0d01f8134fea31e049486a135049ba2ff9479ce05faf063f37c7aa2162" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.442458 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5vn8h"] Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.443074 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="northd" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443119 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="northd" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.443134 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="kubecfg-setup" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443144 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="kubecfg-setup" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.443161 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="sbdb" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443174 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="sbdb" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.443191 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovn-acl-logging" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443201 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovn-acl-logging" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.443212 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovnkube-controller" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443223 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovnkube-controller" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.443239 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="kube-rbac-proxy-node" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443249 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="kube-rbac-proxy-node" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.443263 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="kube-rbac-proxy-ovn-metrics" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443274 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="kube-rbac-proxy-ovn-metrics" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.443293 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovn-controller" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443303 4760 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovn-controller" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.443316 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="nbdb" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443326 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="nbdb" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443474 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovnkube-controller" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443492 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovn-acl-logging" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443507 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="sbdb" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443522 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovn-controller" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443533 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="northd" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443547 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovnkube-controller" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443564 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="nbdb" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443576 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="kube-rbac-proxy-ovn-metrics" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443593 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="kube-rbac-proxy-node" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.443744 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovnkube-controller" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.443757 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" containerName="ovnkube-controller" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.446547 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.454383 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.456517 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.456703 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.457891 4760 scope.go:117] "RemoveContainer" containerID="5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.485849 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-log-socket\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.485892 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-cni-netd\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.485911 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-etc-openvswitch\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.485925 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-kubelet\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.485941 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-var-lib-openvswitch\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.485960 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdjpm\" (UniqueName: \"kubernetes.io/projected/80a33ee8-93b8-4418-aac2-0f651bfa431e-kube-api-access-zdjpm\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.485976 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-run-openvswitch\") pod \"ovnkube-node-5vn8h\" (UID: 
\"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.485994 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-run-ovn-kubernetes\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.486048 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-run-ovn\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.486083 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/80a33ee8-93b8-4418-aac2-0f651bfa431e-ovnkube-config\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.486121 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-slash\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.486227 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-run-systemd\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.486245 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/80a33ee8-93b8-4418-aac2-0f651bfa431e-ovnkube-script-lib\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.486697 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/80a33ee8-93b8-4418-aac2-0f651bfa431e-ovn-node-metrics-cert\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.486796 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-cni-bin\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.486882 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.486930 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-run-netns\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.486949 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-node-log\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.486962 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-systemd-units\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.486980 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/80a33ee8-93b8-4418-aac2-0f651bfa431e-env-overrides\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.487031 4760 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.487041 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.487050 4760 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.487060 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g5xx\" (UniqueName: \"kubernetes.io/projected/7c8aa557-e11a-4c40-9179-22811f44ff18-kube-api-access-8g5xx\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.487069 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c8aa557-e11a-4c40-9179-22811f44ff18-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.487077 4760 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-log-socket\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.487101 4760 reconciler_common.go:293] "Volume 
detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.487180 4760 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.487222 4760 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7c8aa557-e11a-4c40-9179-22811f44ff18-node-log\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.487235 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c8aa557-e11a-4c40-9179-22811f44ff18-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.493470 4760 scope.go:117] "RemoveContainer" containerID="c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.531800 4760 scope.go:117] "RemoveContainer" containerID="53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.540825 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rmm9v"] Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.560715 4760 scope.go:117] "RemoveContainer" containerID="b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.567611 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rmm9v"] Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.577702 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.583680 4760 scope.go:117] "RemoveContainer" containerID="498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590207 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590275 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-run-netns\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590299 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-node-log\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590314 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-systemd-units\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590351 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/80a33ee8-93b8-4418-aac2-0f651bfa431e-env-overrides\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590376 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-log-socket\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590391 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-cni-netd\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590406 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-kubelet\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590442 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-etc-openvswitch\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590465 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-var-lib-openvswitch\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590485 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdjpm\" (UniqueName: \"kubernetes.io/projected/80a33ee8-93b8-4418-aac2-0f651bfa431e-kube-api-access-zdjpm\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590520 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-run-openvswitch\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590540 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-run-ovn-kubernetes\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590593 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-run-ovn\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590620 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/80a33ee8-93b8-4418-aac2-0f651bfa431e-ovnkube-config\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590638 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-slash\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590675 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-run-systemd\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590692 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/80a33ee8-93b8-4418-aac2-0f651bfa431e-ovnkube-script-lib\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590711 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/80a33ee8-93b8-4418-aac2-0f651bfa431e-ovn-node-metrics-cert\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590727 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-cni-bin\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590807 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-cni-bin\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590845 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590867 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-run-netns\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590887 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-node-log\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.590906 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-systemd-units\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.591395 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/80a33ee8-93b8-4418-aac2-0f651bfa431e-env-overrides\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.591436 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-log-socket\") pod \"ovnkube-node-5vn8h\" (UID: 
\"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.591460 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-cni-netd\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.591482 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-kubelet\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.591501 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-etc-openvswitch\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.591524 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-var-lib-openvswitch\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.591798 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-run-systemd\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.591861 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-slash\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.591890 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-run-openvswitch\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.591803 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-host-run-ovn-kubernetes\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.592105 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/80a33ee8-93b8-4418-aac2-0f651bfa431e-run-ovn\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.592275 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/80a33ee8-93b8-4418-aac2-0f651bfa431e-ovnkube-config\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.593454 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/80a33ee8-93b8-4418-aac2-0f651bfa431e-ovnkube-script-lib\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.605389 4760 scope.go:117] "RemoveContainer" containerID="105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.605662 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/80a33ee8-93b8-4418-aac2-0f651bfa431e-ovn-node-metrics-cert\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.610211 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdjpm\" (UniqueName: \"kubernetes.io/projected/80a33ee8-93b8-4418-aac2-0f651bfa431e-kube-api-access-zdjpm\") pod \"ovnkube-node-5vn8h\" (UID: \"80a33ee8-93b8-4418-aac2-0f651bfa431e\") " pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.625661 4760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-6769fb99d-qcdnn_openshift-nmstate_cb7c9b2c-f35f-42bf-8419-5c0615323e3a_0(ed5c3c6f6d489780d07a984c487c95cc509e1867f55ff0a2600de845a3423de0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.625826 4760 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-6769fb99d-qcdnn_openshift-nmstate_cb7c9b2c-f35f-42bf-8419-5c0615323e3a_0(ed5c3c6f6d489780d07a984c487c95cc509e1867f55ff0a2600de845a3423de0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.625862 4760 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-6769fb99d-qcdnn_openshift-nmstate_cb7c9b2c-f35f-42bf-8419-5c0615323e3a_0(ed5c3c6f6d489780d07a984c487c95cc509e1867f55ff0a2600de845a3423de0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.625988 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-6769fb99d-qcdnn_openshift-nmstate(cb7c9b2c-f35f-42bf-8419-5c0615323e3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-6769fb99d-qcdnn_openshift-nmstate(cb7c9b2c-f35f-42bf-8419-5c0615323e3a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-6769fb99d-qcdnn_openshift-nmstate_cb7c9b2c-f35f-42bf-8419-5c0615323e3a_0(ed5c3c6f6d489780d07a984c487c95cc509e1867f55ff0a2600de845a3423de0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" podUID="cb7c9b2c-f35f-42bf-8419-5c0615323e3a" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.649336 4760 scope.go:117] "RemoveContainer" containerID="5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.672130 4760 scope.go:117] "RemoveContainer" containerID="a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.691834 4760 scope.go:117] "RemoveContainer" containerID="203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.706607 4760 scope.go:117] "RemoveContainer" containerID="ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.707285 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f\": container with ID starting with ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f not found: ID does not exist" containerID="ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.707329 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f"} err="failed to get container status \"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f\": rpc error: code = NotFound desc = could not find container \"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f\": container with ID starting with ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.707365 4760 scope.go:117] "RemoveContainer" containerID="5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.709270 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1\": container with ID starting with 5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1 not found: ID does not exist" containerID="5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.709312 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1"} 
err="failed to get container status \"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1\": rpc error: code = NotFound desc = could not find container \"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1\": container with ID starting with 5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1 not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.709341 4760 scope.go:117] "RemoveContainer" containerID="c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.709740 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\": container with ID starting with c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396 not found: ID does not exist" containerID="c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.709777 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396"} err="failed to get container status \"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\": rpc error: code = NotFound desc = could not find container \"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\": container with ID starting with c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396 not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.709802 4760 scope.go:117] "RemoveContainer" containerID="53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.710238 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\": container with ID starting with 53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826 not found: ID does not exist" containerID="53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.710329 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826"} err="failed to get container status \"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\": rpc error: code = NotFound desc = could not find container \"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\": container with ID starting with 53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826 not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.710408 4760 scope.go:117] "RemoveContainer" containerID="b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.710770 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\": container with ID starting with b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba not found: ID does not exist" containerID="b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.710857 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba"} err="failed to get container status \"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\": rpc error: code = NotFound desc = could not find container \"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\": container with ID starting with b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.710921 4760 scope.go:117] "RemoveContainer" containerID="498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.711262 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\": container with ID starting with 498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a not found: ID does not exist" containerID="498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.711287 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a"} err="failed to get container status \"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\": rpc error: code = NotFound desc = could not find container \"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\": container with ID starting with 498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.711301 4760 scope.go:117] "RemoveContainer" containerID="105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.711638 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\": container with ID starting with 105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d not found: ID does not exist" containerID="105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.711679 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d"} err="failed to get container status \"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\": rpc error: code = NotFound desc = could not find container \"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\": container with ID starting with 105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.711709 4760 scope.go:117] "RemoveContainer" containerID="5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.711994 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\": container with ID starting with 5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156 not found: ID does 
not exist" containerID="5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.712076 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156"} err="failed to get container status \"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\": rpc error: code = NotFound desc = could not find container \"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\": container with ID starting with 5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156 not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.712158 4760 scope.go:117] "RemoveContainer" containerID="a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.712515 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\": container with ID starting with a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e not found: ID does not exist" containerID="a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.712583 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e"} err="failed to get container status \"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\": rpc error: code = NotFound desc = could not find container \"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\": container with ID starting with a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.712639 4760 scope.go:117] "RemoveContainer" containerID="203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c" Dec 27 05:57:31 crc kubenswrapper[4760]: E1227 05:57:31.713009 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\": container with ID starting with 203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c not found: ID does not exist" containerID="203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.713043 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c"} err="failed to get container status \"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\": rpc error: code = NotFound desc = could not find container \"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\": container with ID starting with 203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.713058 4760 scope.go:117] "RemoveContainer" containerID="ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.713297 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f"} err="failed to get container status \"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f\": rpc error: code = NotFound desc = could not find container \"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f\": container with ID starting with ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.713315 4760 scope.go:117] "RemoveContainer" containerID="5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.713513 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1"} err="failed to get container status \"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1\": rpc error: code = NotFound desc = could not find container \"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1\": container with ID starting with 5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1 not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.713529 4760 scope.go:117] "RemoveContainer" containerID="c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.713704 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396"} err="failed to get container status \"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\": rpc error: code = NotFound desc = could not find container \"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\": container with ID starting with c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396 not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.713720 4760 scope.go:117] "RemoveContainer" containerID="53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.713896 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826"} err="failed to get container status \"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\": rpc error: code = NotFound desc = could not find container \"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\": container with ID starting with 53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826 not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.713914 4760 scope.go:117] "RemoveContainer" containerID="b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.715213 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba"} err="failed to get container status \"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\": rpc error: code = NotFound desc = could not find container \"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\": container with ID starting with b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba not found: ID does not exist" Dec 
27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.715289 4760 scope.go:117] "RemoveContainer" containerID="498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.716815 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a"} err="failed to get container status \"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\": rpc error: code = NotFound desc = could not find container \"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\": container with ID starting with 498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.716923 4760 scope.go:117] "RemoveContainer" containerID="105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.717195 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d"} err="failed to get container status \"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\": rpc error: code = NotFound desc = could not find container \"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\": container with ID starting with 105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.717218 4760 scope.go:117] "RemoveContainer" containerID="5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.717493 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156"} err="failed to get container status \"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\": rpc error: code = NotFound desc = could not find container \"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\": container with ID starting with 5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156 not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.717570 4760 scope.go:117] "RemoveContainer" containerID="a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.717809 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e"} err="failed to get container status \"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\": rpc error: code = NotFound desc = could not find container \"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\": container with ID starting with a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.717826 4760 scope.go:117] "RemoveContainer" containerID="203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.718083 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c"} err="failed to get container status 
\"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\": rpc error: code = NotFound desc = could not find container \"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\": container with ID starting with 203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.718129 4760 scope.go:117] "RemoveContainer" containerID="ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.718355 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f"} err="failed to get container status \"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f\": rpc error: code = NotFound desc = could not find container \"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f\": container with ID starting with ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.718426 4760 scope.go:117] "RemoveContainer" containerID="5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.720826 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1"} err="failed to get container status \"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1\": rpc error: code = NotFound desc = could not find container \"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1\": container with ID starting with 5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1 not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.720844 4760 scope.go:117] "RemoveContainer" containerID="c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.721380 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396"} err="failed to get container status \"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\": rpc error: code = NotFound desc = could not find container \"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\": container with ID starting with c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396 not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.721463 4760 scope.go:117] "RemoveContainer" containerID="53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.722650 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826"} err="failed to get container status \"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\": rpc error: code = NotFound desc = could not find container \"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\": container with ID starting with 53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826 not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.722728 4760 scope.go:117] "RemoveContainer" 
containerID="b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.723168 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba"} err="failed to get container status \"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\": rpc error: code = NotFound desc = could not find container \"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\": container with ID starting with b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.723209 4760 scope.go:117] "RemoveContainer" containerID="498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.723456 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a"} err="failed to get container status \"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\": rpc error: code = NotFound desc = could not find container \"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\": container with ID starting with 498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.723549 4760 scope.go:117] "RemoveContainer" containerID="105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.724267 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d"} err="failed to get container status \"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\": rpc error: code = NotFound desc = could not find container \"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\": container with ID starting with 105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.724306 4760 scope.go:117] "RemoveContainer" containerID="5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.724589 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156"} err="failed to get container status \"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\": rpc error: code = NotFound desc = could not find container \"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\": container with ID starting with 5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156 not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.724658 4760 scope.go:117] "RemoveContainer" containerID="a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.725178 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e"} err="failed to get container status \"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\": rpc error: code = NotFound desc = could not find 
container \"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\": container with ID starting with a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.725213 4760 scope.go:117] "RemoveContainer" containerID="203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.725547 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c"} err="failed to get container status \"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\": rpc error: code = NotFound desc = could not find container \"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\": container with ID starting with 203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.725578 4760 scope.go:117] "RemoveContainer" containerID="ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.725911 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f"} err="failed to get container status \"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f\": rpc error: code = NotFound desc = could not find container \"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f\": container with ID starting with ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.725976 4760 scope.go:117] "RemoveContainer" containerID="5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.726257 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1"} err="failed to get container status \"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1\": rpc error: code = NotFound desc = could not find container \"5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1\": container with ID starting with 5e6f4ddfbdb7c7e60862f294260b31dd8701c67a79ab4f434ef01ca4a6d2ecc1 not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.726277 4760 scope.go:117] "RemoveContainer" containerID="c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.726683 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396"} err="failed to get container status \"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\": rpc error: code = NotFound desc = could not find container \"c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396\": container with ID starting with c4d1390dfc87d160d81347520345be812094b6d08a67749306fd668998182396 not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.726748 4760 scope.go:117] "RemoveContainer" containerID="53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.727120 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826"} err="failed to get container status \"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\": rpc error: code = NotFound desc = could not find container \"53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826\": container with ID starting with 53c1184e39f4dc63098d837ba3b1101e47f8466f8c959b011dc44060346b2826 not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.727156 4760 scope.go:117] "RemoveContainer" containerID="b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.727468 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba"} err="failed to get container status \"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\": rpc error: code = NotFound desc = could not find container \"b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba\": container with ID starting with b21d88ccc913af4190c4e899fd79c5d611a2aa304a4dccd2ef10c0ad054543ba not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.727533 4760 scope.go:117] "RemoveContainer" containerID="498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.727849 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a"} err="failed to get container status \"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\": rpc error: code = NotFound desc = could not find container \"498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a\": container with ID starting with 498e352caae88cb800cee379758a1cc40a36ebbcec9224088ecd40dcbcc6527a not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.727868 4760 scope.go:117] "RemoveContainer" containerID="105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.728129 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d"} err="failed to get container status \"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\": rpc error: code = NotFound desc = could not find container \"105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d\": container with ID starting with 105145a40195c0bb607a4dd4a91b186259842049347de62c4a30aa963ac8d32d not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.728150 4760 scope.go:117] "RemoveContainer" containerID="5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.728465 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156"} err="failed to get container status \"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\": rpc error: code = NotFound desc = could not find container \"5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156\": container with ID starting with 
5d9bf0de5479a43a43dda9734065812ad6dbe43f731014fc05cd136dadd09156 not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.728492 4760 scope.go:117] "RemoveContainer" containerID="a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.728830 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e"} err="failed to get container status \"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\": rpc error: code = NotFound desc = could not find container \"a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e\": container with ID starting with a373114752019b2b0f261e54cb21a7d909771353cd5e9816f8c685824b94e40e not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.728919 4760 scope.go:117] "RemoveContainer" containerID="203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.729257 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c"} err="failed to get container status \"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\": rpc error: code = NotFound desc = could not find container \"203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c\": container with ID starting with 203db08df4d7dc70baa546781f154480ee5e8424eec8079b48a263ff0a464b7c not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.729277 4760 scope.go:117] "RemoveContainer" containerID="ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.729566 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f"} err="failed to get container status \"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f\": rpc error: code = NotFound desc = could not find container \"ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f\": container with ID starting with ee0a26c3b1f6d7b4dc601f6c79a1e140fa361226063725a6867a75978ab3d49f not found: ID does not exist" Dec 27 05:57:31 crc kubenswrapper[4760]: I1227 05:57:31.828199 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:31 crc kubenswrapper[4760]: W1227 05:57:31.846818 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80a33ee8_93b8_4418_aac2_0f651bfa431e.slice/crio-cc59112c47e464f278c8e2d61d98970ba52102ffae7c46231b3b758a180c8f27 WatchSource:0}: Error finding container cc59112c47e464f278c8e2d61d98970ba52102ffae7c46231b3b758a180c8f27: Status 404 returned error can't find the container with id cc59112c47e464f278c8e2d61d98970ba52102ffae7c46231b3b758a180c8f27 Dec 27 05:57:32 crc kubenswrapper[4760]: I1227 05:57:32.440946 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fmk6w_b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d/kube-multus/0.log" Dec 27 05:57:32 crc kubenswrapper[4760]: I1227 05:57:32.441421 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fmk6w" event={"ID":"b1c1a9cc-4ace-451e-b8c6-ef9a14bc7a6d","Type":"ContainerStarted","Data":"4f52354fcdab236ea2fb911140fcb065a347b43a1bff61d7d1d894397b918bb1"} Dec 27 05:57:32 crc kubenswrapper[4760]: I1227 05:57:32.445256 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" event={"ID":"80a33ee8-93b8-4418-aac2-0f651bfa431e","Type":"ContainerStarted","Data":"cc59112c47e464f278c8e2d61d98970ba52102ffae7c46231b3b758a180c8f27"} Dec 27 05:57:33 crc kubenswrapper[4760]: I1227 05:57:33.454579 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" event={"ID":"80a33ee8-93b8-4418-aac2-0f651bfa431e","Type":"ContainerStarted","Data":"2fafda71e1da4fd618decfaa84c7213a07324d1589e74b5dc78fc06d783885d1"} Dec 27 05:57:33 crc kubenswrapper[4760]: I1227 05:57:33.476353 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:33 crc kubenswrapper[4760]: I1227 05:57:33.476405 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:33 crc kubenswrapper[4760]: I1227 05:57:33.509521 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c8aa557-e11a-4c40-9179-22811f44ff18" path="/var/lib/kubelet/pods/7c8aa557-e11a-4c40-9179-22811f44ff18/volumes" Dec 27 05:57:34 crc kubenswrapper[4760]: I1227 05:57:34.463009 4760 generic.go:334] "Generic (PLEG): container finished" podID="80a33ee8-93b8-4418-aac2-0f651bfa431e" containerID="2fafda71e1da4fd618decfaa84c7213a07324d1589e74b5dc78fc06d783885d1" exitCode=0 Dec 27 05:57:34 crc kubenswrapper[4760]: I1227 05:57:34.463163 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" event={"ID":"80a33ee8-93b8-4418-aac2-0f651bfa431e","Type":"ContainerDied","Data":"2fafda71e1da4fd618decfaa84c7213a07324d1589e74b5dc78fc06d783885d1"} Dec 27 05:57:34 crc kubenswrapper[4760]: I1227 05:57:34.533874 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nbhlj" podUID="a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51" containerName="registry-server" probeResult="failure" output=< Dec 27 05:57:34 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Dec 27 05:57:34 crc kubenswrapper[4760]: > Dec 27 05:57:35 crc kubenswrapper[4760]: I1227 05:57:35.288327 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 05:57:35 crc kubenswrapper[4760]: I1227 05:57:35.288759 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 05:57:35 crc kubenswrapper[4760]: I1227 05:57:35.474020 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" event={"ID":"80a33ee8-93b8-4418-aac2-0f651bfa431e","Type":"ContainerStarted","Data":"7664316fc0d8da036d3d3254cb2b462b2b6f6b9c7852bcd7c477097ab12309c0"} Dec 27 05:57:35 crc kubenswrapper[4760]: I1227 05:57:35.475499 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" event={"ID":"80a33ee8-93b8-4418-aac2-0f651bfa431e","Type":"ContainerStarted","Data":"834bfcd2e508be90fd0cfc903479707710b4d790fcc7f630d880aef8c27e655a"} Dec 27 05:57:35 crc kubenswrapper[4760]: I1227 05:57:35.475545 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" event={"ID":"80a33ee8-93b8-4418-aac2-0f651bfa431e","Type":"ContainerStarted","Data":"40127354b8813d5f4afbafcf27415d308a6101bef4d68d9afa77b338e926966f"} Dec 27 05:57:35 crc kubenswrapper[4760]: I1227 05:57:35.475570 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" event={"ID":"80a33ee8-93b8-4418-aac2-0f651bfa431e","Type":"ContainerStarted","Data":"3d3fc8037fe8b2528a6904ae265bb439a4d2c67c2fe6c7245171b022a3b98a7d"} Dec 27 05:57:36 crc kubenswrapper[4760]: I1227 05:57:36.483033 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" event={"ID":"80a33ee8-93b8-4418-aac2-0f651bfa431e","Type":"ContainerStarted","Data":"56bffb9c5d5cc51556be7b1e8196c638ffb3b86092cd275d1eb4665d87f38daa"} Dec 27 05:57:36 crc kubenswrapper[4760]: I1227 05:57:36.483075 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" event={"ID":"80a33ee8-93b8-4418-aac2-0f651bfa431e","Type":"ContainerStarted","Data":"aac8620adebfffab25a784f657ac7979396eee0ef30c892a8c1acaae97235f84"} Dec 27 05:57:39 crc kubenswrapper[4760]: I1227 05:57:39.514752 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" event={"ID":"80a33ee8-93b8-4418-aac2-0f651bfa431e","Type":"ContainerStarted","Data":"8384098ca7fb650bb5927305b9bc4c554bc9ade3b48218d0434e30c643ed5757"} Dec 27 05:57:43 crc kubenswrapper[4760]: I1227 05:57:43.533611 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:43 crc kubenswrapper[4760]: I1227 05:57:43.601698 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:43 crc kubenswrapper[4760]: I1227 05:57:43.786592 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nbhlj"] Dec 27 05:57:44 crc kubenswrapper[4760]: I1227 05:57:44.551426 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" 
event={"ID":"80a33ee8-93b8-4418-aac2-0f651bfa431e","Type":"ContainerStarted","Data":"dc60fb06687235f7c229f25a8f843d5bf70396011ad169e61bdca0279119de8b"} Dec 27 05:57:44 crc kubenswrapper[4760]: I1227 05:57:44.551982 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:44 crc kubenswrapper[4760]: I1227 05:57:44.552191 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:44 crc kubenswrapper[4760]: I1227 05:57:44.552382 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:44 crc kubenswrapper[4760]: I1227 05:57:44.651477 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:44 crc kubenswrapper[4760]: I1227 05:57:44.652083 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:57:44 crc kubenswrapper[4760]: I1227 05:57:44.693643 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" podStartSLOduration=13.693627653 podStartE2EDuration="13.693627653s" podCreationTimestamp="2025-12-27 05:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:57:44.600370867 +0000 UTC m=+787.360440192" watchObservedRunningTime="2025-12-27 05:57:44.693627653 +0000 UTC m=+787.453696978" Dec 27 05:57:45 crc kubenswrapper[4760]: I1227 05:57:45.224689 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-qcdnn"] Dec 27 05:57:45 crc kubenswrapper[4760]: I1227 05:57:45.224826 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" Dec 27 05:57:45 crc kubenswrapper[4760]: I1227 05:57:45.225313 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" Dec 27 05:57:45 crc kubenswrapper[4760]: E1227 05:57:45.255363 4760 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-6769fb99d-qcdnn_openshift-nmstate_cb7c9b2c-f35f-42bf-8419-5c0615323e3a_0(1ed57f76570b4163d18d3457cefda1be9f45ecd56fc4ecc43625065fb6b86e07): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 27 05:57:45 crc kubenswrapper[4760]: E1227 05:57:45.255461 4760 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-6769fb99d-qcdnn_openshift-nmstate_cb7c9b2c-f35f-42bf-8419-5c0615323e3a_0(1ed57f76570b4163d18d3457cefda1be9f45ecd56fc4ecc43625065fb6b86e07): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" Dec 27 05:57:45 crc kubenswrapper[4760]: E1227 05:57:45.255494 4760 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-6769fb99d-qcdnn_openshift-nmstate_cb7c9b2c-f35f-42bf-8419-5c0615323e3a_0(1ed57f76570b4163d18d3457cefda1be9f45ecd56fc4ecc43625065fb6b86e07): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" Dec 27 05:57:45 crc kubenswrapper[4760]: E1227 05:57:45.255567 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-6769fb99d-qcdnn_openshift-nmstate(cb7c9b2c-f35f-42bf-8419-5c0615323e3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-6769fb99d-qcdnn_openshift-nmstate(cb7c9b2c-f35f-42bf-8419-5c0615323e3a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-6769fb99d-qcdnn_openshift-nmstate_cb7c9b2c-f35f-42bf-8419-5c0615323e3a_0(1ed57f76570b4163d18d3457cefda1be9f45ecd56fc4ecc43625065fb6b86e07): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" podUID="cb7c9b2c-f35f-42bf-8419-5c0615323e3a" Dec 27 05:57:45 crc kubenswrapper[4760]: I1227 05:57:45.555940 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nbhlj" podUID="a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51" containerName="registry-server" containerID="cri-o://64fe480911dcb1d63230fc25c285a2091c733b74f960e27749483ba49f5f3f71" gracePeriod=2 Dec 27 05:57:46 crc kubenswrapper[4760]: I1227 05:57:46.573600 4760 generic.go:334] "Generic (PLEG): container finished" podID="a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51" containerID="64fe480911dcb1d63230fc25c285a2091c733b74f960e27749483ba49f5f3f71" exitCode=0 Dec 27 05:57:46 crc kubenswrapper[4760]: I1227 05:57:46.573709 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbhlj" event={"ID":"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51","Type":"ContainerDied","Data":"64fe480911dcb1d63230fc25c285a2091c733b74f960e27749483ba49f5f3f71"} Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.046270 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.242968 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkpwv\" (UniqueName: \"kubernetes.io/projected/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-kube-api-access-pkpwv\") pod \"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51\" (UID: \"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51\") " Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.243037 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-catalog-content\") pod \"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51\" (UID: \"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51\") " Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.243129 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-utilities\") pod \"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51\" (UID: \"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51\") " Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.244960 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-utilities" (OuterVolumeSpecName: "utilities") pod "a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51" (UID: "a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.248321 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-kube-api-access-pkpwv" (OuterVolumeSpecName: "kube-api-access-pkpwv") pod "a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51" (UID: "a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51"). InnerVolumeSpecName "kube-api-access-pkpwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.345242 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkpwv\" (UniqueName: \"kubernetes.io/projected/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-kube-api-access-pkpwv\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.345282 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.388892 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51" (UID: "a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.446515 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.583753 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nbhlj" event={"ID":"a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51","Type":"ContainerDied","Data":"5528e5d5884cbe6d499a055c0acc491b7ab816235a397bd7ffc48bcb33fc3f9c"} Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.583825 4760 scope.go:117] "RemoveContainer" containerID="64fe480911dcb1d63230fc25c285a2091c733b74f960e27749483ba49f5f3f71" Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.583908 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nbhlj" Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.613890 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nbhlj"] Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.620832 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nbhlj"] Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.624818 4760 scope.go:117] "RemoveContainer" containerID="ec14fe761e319f208650a2c21478593489a67a10f54fa0ba76a27e9b1d6bbe6f" Dec 27 05:57:47 crc kubenswrapper[4760]: I1227 05:57:47.663311 4760 scope.go:117] "RemoveContainer" containerID="7708b2234bb72afb71493364d39836c05bb2fed6fbb5d43d718f5528336a0f7e" Dec 27 05:57:49 crc kubenswrapper[4760]: I1227 05:57:49.509792 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51" path="/var/lib/kubelet/pods/a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51/volumes" Dec 27 05:57:58 crc kubenswrapper[4760]: I1227 05:57:58.502356 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" Dec 27 05:57:58 crc kubenswrapper[4760]: I1227 05:57:58.503647 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" Dec 27 05:57:58 crc kubenswrapper[4760]: I1227 05:57:58.934680 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-qcdnn"] Dec 27 05:57:59 crc kubenswrapper[4760]: I1227 05:57:59.679049 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" event={"ID":"cb7c9b2c-f35f-42bf-8419-5c0615323e3a","Type":"ContainerStarted","Data":"d4178260bc636bd21f8fa8d40ae7540fd493c6608a10048332d7f4c8048379b0"} Dec 27 05:58:01 crc kubenswrapper[4760]: I1227 05:58:01.694678 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" event={"ID":"cb7c9b2c-f35f-42bf-8419-5c0615323e3a","Type":"ContainerStarted","Data":"cf1afdac84e942f1f6c028de17a1bc66cbdf6bb16e3d94be65c9fb8ed7f2bb19"} Dec 27 05:58:01 crc kubenswrapper[4760]: I1227 05:58:01.723496 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-qcdnn" podStartSLOduration=28.847906761 podStartE2EDuration="30.723477914s" podCreationTimestamp="2025-12-27 05:57:31 +0000 UTC" firstStartedPulling="2025-12-27 05:57:58.951623812 +0000 UTC m=+801.711693167" lastFinishedPulling="2025-12-27 05:58:00.827195005 +0000 UTC m=+803.587264320" observedRunningTime="2025-12-27 05:58:01.720315247 +0000 UTC m=+804.480384572" watchObservedRunningTime="2025-12-27 05:58:01.723477914 +0000 UTC m=+804.483547229" Dec 27 05:58:01 crc kubenswrapper[4760]: I1227 05:58:01.865043 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5vn8h" Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.830215 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-fvcbl"] Dec 27 05:58:02 crc kubenswrapper[4760]: E1227 05:58:02.830740 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51" containerName="registry-server" Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.830769 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51" containerName="registry-server" Dec 27 05:58:02 crc kubenswrapper[4760]: E1227 05:58:02.830787 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51" containerName="extract-content" Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.830943 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51" containerName="extract-content" Dec 27 05:58:02 crc kubenswrapper[4760]: E1227 05:58:02.830966 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51" containerName="extract-utilities" Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.830978 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51" containerName="extract-utilities" Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.831298 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0fb31e0-24dd-4dc7-ba9a-df27dd67ec51" containerName="registry-server" Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.832481 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-fvcbl" Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.846298 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-gc2rb" Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.857057 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-fvcbl"] Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.881599 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm"] Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.882760 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm" Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.884418 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.901173 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm"] Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.906521 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-78cl5"] Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.908645 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-78cl5" Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.966933 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw"] Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.967420 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r8zx\" (UniqueName: \"kubernetes.io/projected/227bb17c-06ab-4906-8d95-b0146bf1868d-kube-api-access-5r8zx\") pod \"nmstate-metrics-7f7f7578db-fvcbl\" (UID: \"227bb17c-06ab-4906-8d95-b0146bf1868d\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-fvcbl" Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.967667 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw" Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.969129 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.969214 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-jddks" Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.972718 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 27 05:58:02 crc kubenswrapper[4760]: I1227 05:58:02.976039 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw"] Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.069108 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8c7w\" (UniqueName: \"kubernetes.io/projected/4f66b8d6-242e-42ea-959d-304f51532744-kube-api-access-n8c7w\") pod \"nmstate-webhook-f8fb84555-fbhrm\" (UID: \"4f66b8d6-242e-42ea-959d-304f51532744\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.069164 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0666407d-d4c1-497a-ae83-518e6ba70085-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-vn8tw\" (UID: \"0666407d-d4c1-497a-ae83-518e6ba70085\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.069214 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ac16e936-081f-470c-a4a9-480fae986f2e-ovs-socket\") pod \"nmstate-handler-78cl5\" (UID: \"ac16e936-081f-470c-a4a9-480fae986f2e\") " pod="openshift-nmstate/nmstate-handler-78cl5" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.069246 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6sp\" (UniqueName: \"kubernetes.io/projected/0666407d-d4c1-497a-ae83-518e6ba70085-kube-api-access-fv6sp\") pod \"nmstate-console-plugin-6ff7998486-vn8tw\" (UID: \"0666407d-d4c1-497a-ae83-518e6ba70085\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.069272 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ac16e936-081f-470c-a4a9-480fae986f2e-dbus-socket\") pod \"nmstate-handler-78cl5\" (UID: \"ac16e936-081f-470c-a4a9-480fae986f2e\") " pod="openshift-nmstate/nmstate-handler-78cl5" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.069295 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4f66b8d6-242e-42ea-959d-304f51532744-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-fbhrm\" (UID: \"4f66b8d6-242e-42ea-959d-304f51532744\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.069322 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/ac16e936-081f-470c-a4a9-480fae986f2e-nmstate-lock\") pod \"nmstate-handler-78cl5\" (UID: \"ac16e936-081f-470c-a4a9-480fae986f2e\") " pod="openshift-nmstate/nmstate-handler-78cl5" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.069351 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0666407d-d4c1-497a-ae83-518e6ba70085-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-vn8tw\" (UID: \"0666407d-d4c1-497a-ae83-518e6ba70085\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.069380 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfmhm\" (UniqueName: \"kubernetes.io/projected/ac16e936-081f-470c-a4a9-480fae986f2e-kube-api-access-rfmhm\") pod \"nmstate-handler-78cl5\" (UID: \"ac16e936-081f-470c-a4a9-480fae986f2e\") " pod="openshift-nmstate/nmstate-handler-78cl5" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.069414 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r8zx\" (UniqueName: \"kubernetes.io/projected/227bb17c-06ab-4906-8d95-b0146bf1868d-kube-api-access-5r8zx\") pod \"nmstate-metrics-7f7f7578db-fvcbl\" (UID: \"227bb17c-06ab-4906-8d95-b0146bf1868d\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-fvcbl" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.105291 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r8zx\" (UniqueName: \"kubernetes.io/projected/227bb17c-06ab-4906-8d95-b0146bf1868d-kube-api-access-5r8zx\") pod \"nmstate-metrics-7f7f7578db-fvcbl\" (UID: \"227bb17c-06ab-4906-8d95-b0146bf1868d\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-fvcbl" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.165508 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7f8bb584c5-skx47"] Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.165643 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-fvcbl" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.166148 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.170079 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8c7w\" (UniqueName: \"kubernetes.io/projected/4f66b8d6-242e-42ea-959d-304f51532744-kube-api-access-n8c7w\") pod \"nmstate-webhook-f8fb84555-fbhrm\" (UID: \"4f66b8d6-242e-42ea-959d-304f51532744\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.170159 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0666407d-d4c1-497a-ae83-518e6ba70085-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-vn8tw\" (UID: \"0666407d-d4c1-497a-ae83-518e6ba70085\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.170216 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ac16e936-081f-470c-a4a9-480fae986f2e-ovs-socket\") pod \"nmstate-handler-78cl5\" (UID: \"ac16e936-081f-470c-a4a9-480fae986f2e\") " pod="openshift-nmstate/nmstate-handler-78cl5" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.170239 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv6sp\" (UniqueName: \"kubernetes.io/projected/0666407d-d4c1-497a-ae83-518e6ba70085-kube-api-access-fv6sp\") pod \"nmstate-console-plugin-6ff7998486-vn8tw\" (UID: \"0666407d-d4c1-497a-ae83-518e6ba70085\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.170256 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ac16e936-081f-470c-a4a9-480fae986f2e-dbus-socket\") pod \"nmstate-handler-78cl5\" (UID: \"ac16e936-081f-470c-a4a9-480fae986f2e\") " pod="openshift-nmstate/nmstate-handler-78cl5" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.170293 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4f66b8d6-242e-42ea-959d-304f51532744-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-fbhrm\" (UID: \"4f66b8d6-242e-42ea-959d-304f51532744\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.170314 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ac16e936-081f-470c-a4a9-480fae986f2e-nmstate-lock\") pod \"nmstate-handler-78cl5\" (UID: \"ac16e936-081f-470c-a4a9-480fae986f2e\") " pod="openshift-nmstate/nmstate-handler-78cl5" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.170335 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0666407d-d4c1-497a-ae83-518e6ba70085-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-vn8tw\" (UID: \"0666407d-d4c1-497a-ae83-518e6ba70085\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.170358 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfmhm\" (UniqueName: 
\"kubernetes.io/projected/ac16e936-081f-470c-a4a9-480fae986f2e-kube-api-access-rfmhm\") pod \"nmstate-handler-78cl5\" (UID: \"ac16e936-081f-470c-a4a9-480fae986f2e\") " pod="openshift-nmstate/nmstate-handler-78cl5" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.170912 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ac16e936-081f-470c-a4a9-480fae986f2e-dbus-socket\") pod \"nmstate-handler-78cl5\" (UID: \"ac16e936-081f-470c-a4a9-480fae986f2e\") " pod="openshift-nmstate/nmstate-handler-78cl5" Dec 27 05:58:03 crc kubenswrapper[4760]: E1227 05:58:03.170985 4760 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 27 05:58:03 crc kubenswrapper[4760]: E1227 05:58:03.171042 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f66b8d6-242e-42ea-959d-304f51532744-tls-key-pair podName:4f66b8d6-242e-42ea-959d-304f51532744 nodeName:}" failed. No retries permitted until 2025-12-27 05:58:03.671027672 +0000 UTC m=+806.431096987 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/4f66b8d6-242e-42ea-959d-304f51532744-tls-key-pair") pod "nmstate-webhook-f8fb84555-fbhrm" (UID: "4f66b8d6-242e-42ea-959d-304f51532744") : secret "openshift-nmstate-webhook" not found Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.171203 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ac16e936-081f-470c-a4a9-480fae986f2e-nmstate-lock\") pod \"nmstate-handler-78cl5\" (UID: \"ac16e936-081f-470c-a4a9-480fae986f2e\") " pod="openshift-nmstate/nmstate-handler-78cl5" Dec 27 05:58:03 crc kubenswrapper[4760]: E1227 05:58:03.171255 4760 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 27 05:58:03 crc kubenswrapper[4760]: E1227 05:58:03.171277 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0666407d-d4c1-497a-ae83-518e6ba70085-plugin-serving-cert podName:0666407d-d4c1-497a-ae83-518e6ba70085 nodeName:}" failed. No retries permitted until 2025-12-27 05:58:03.671270428 +0000 UTC m=+806.431339743 (durationBeforeRetry 500ms). 
Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.171300 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ac16e936-081f-470c-a4a9-480fae986f2e-ovs-socket\") pod \"nmstate-handler-78cl5\" (UID: \"ac16e936-081f-470c-a4a9-480fae986f2e\") " pod="openshift-nmstate/nmstate-handler-78cl5"
Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.172408 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0666407d-d4c1-497a-ae83-518e6ba70085-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-vn8tw\" (UID: \"0666407d-d4c1-497a-ae83-518e6ba70085\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw"
Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.184399 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f8bb584c5-skx47"]
Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.212479 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv6sp\" (UniqueName: \"kubernetes.io/projected/0666407d-d4c1-497a-ae83-518e6ba70085-kube-api-access-fv6sp\") pod \"nmstate-console-plugin-6ff7998486-vn8tw\" (UID: \"0666407d-d4c1-497a-ae83-518e6ba70085\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw"
Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.216359 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8c7w\" (UniqueName: \"kubernetes.io/projected/4f66b8d6-242e-42ea-959d-304f51532744-kube-api-access-n8c7w\") pod \"nmstate-webhook-f8fb84555-fbhrm\" (UID: \"4f66b8d6-242e-42ea-959d-304f51532744\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm"
Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.221753 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfmhm\" (UniqueName: \"kubernetes.io/projected/ac16e936-081f-470c-a4a9-480fae986f2e-kube-api-access-rfmhm\") pod \"nmstate-handler-78cl5\" (UID: \"ac16e936-081f-470c-a4a9-480fae986f2e\") " pod="openshift-nmstate/nmstate-handler-78cl5"
Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.229705 4760 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-nmstate/nmstate-handler-78cl5" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.272124 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-oauth-serving-cert\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.272314 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-service-ca\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.272336 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-console-serving-cert\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.272359 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drvcw\" (UniqueName: \"kubernetes.io/projected/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-kube-api-access-drvcw\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.272376 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-console-config\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.272407 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-trusted-ca-bundle\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.272432 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-console-oauth-config\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.373616 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-trusted-ca-bundle\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.373689 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-console-oauth-config\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.373765 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-oauth-serving-cert\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.373798 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-console-serving-cert\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.373820 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-service-ca\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.373849 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drvcw\" (UniqueName: \"kubernetes.io/projected/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-kube-api-access-drvcw\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.373873 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-console-config\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.374679 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-console-config\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.375311 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-trusted-ca-bundle\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.375679 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-service-ca\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.375924 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-oauth-serving-cert\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.378538 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-console-serving-cert\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.379805 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-console-oauth-config\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.392621 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drvcw\" (UniqueName: \"kubernetes.io/projected/cb3de01a-5bae-4f46-8286-d3a771e9b9e4-kube-api-access-drvcw\") pod \"console-7f8bb584c5-skx47\" (UID: \"cb3de01a-5bae-4f46-8286-d3a771e9b9e4\") " pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.403717 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-fvcbl"] Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.483150 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.678492 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4f66b8d6-242e-42ea-959d-304f51532744-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-fbhrm\" (UID: \"4f66b8d6-242e-42ea-959d-304f51532744\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.678865 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0666407d-d4c1-497a-ae83-518e6ba70085-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-vn8tw\" (UID: \"0666407d-d4c1-497a-ae83-518e6ba70085\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.685360 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0666407d-d4c1-497a-ae83-518e6ba70085-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-vn8tw\" (UID: \"0666407d-d4c1-497a-ae83-518e6ba70085\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.687801 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4f66b8d6-242e-42ea-959d-304f51532744-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-fbhrm\" (UID: \"4f66b8d6-242e-42ea-959d-304f51532744\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.696965 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f8bb584c5-skx47"] 
Dec 27 05:58:03 crc kubenswrapper[4760]: W1227 05:58:03.704044 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb3de01a_5bae_4f46_8286_d3a771e9b9e4.slice/crio-71f7e60d109703a3cdadd6c79c70d0cb8ba2dbe79ddea34b5298635cbd103a5b WatchSource:0}: Error finding container 71f7e60d109703a3cdadd6c79c70d0cb8ba2dbe79ddea34b5298635cbd103a5b: Status 404 returned error can't find the container with id 71f7e60d109703a3cdadd6c79c70d0cb8ba2dbe79ddea34b5298635cbd103a5b Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.706083 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-fvcbl" event={"ID":"227bb17c-06ab-4906-8d95-b0146bf1868d","Type":"ContainerStarted","Data":"85d1e96c619e2b99ab5c9e66435a18a9f297955bea3f2e62dd94abf4ed6f27c2"} Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.708363 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-78cl5" event={"ID":"ac16e936-081f-470c-a4a9-480fae986f2e","Type":"ContainerStarted","Data":"147d0261a93796c3ab7b099bad4c7b97ff91a806cc73d94c6fd5a037ff6f40f8"} Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.810680 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm" Dec 27 05:58:03 crc kubenswrapper[4760]: I1227 05:58:03.885807 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw" Dec 27 05:58:04 crc kubenswrapper[4760]: I1227 05:58:04.069627 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm"] Dec 27 05:58:04 crc kubenswrapper[4760]: W1227 05:58:04.080561 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f66b8d6_242e_42ea_959d_304f51532744.slice/crio-66375799bd13ab0a231022e42b79a08245ac5303ebc901b8fbba0fc8443036ad WatchSource:0}: Error finding container 66375799bd13ab0a231022e42b79a08245ac5303ebc901b8fbba0fc8443036ad: Status 404 returned error can't find the container with id 66375799bd13ab0a231022e42b79a08245ac5303ebc901b8fbba0fc8443036ad Dec 27 05:58:04 crc kubenswrapper[4760]: I1227 05:58:04.131912 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw"] Dec 27 05:58:04 crc kubenswrapper[4760]: W1227 05:58:04.137402 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0666407d_d4c1_497a_ae83_518e6ba70085.slice/crio-07ee77070023ab3378fb851c4e0c6316e5a45bb05ab72f6ac796b0cf58edb490 WatchSource:0}: Error finding container 07ee77070023ab3378fb851c4e0c6316e5a45bb05ab72f6ac796b0cf58edb490: Status 404 returned error can't find the container with id 07ee77070023ab3378fb851c4e0c6316e5a45bb05ab72f6ac796b0cf58edb490 Dec 27 05:58:04 crc kubenswrapper[4760]: I1227 05:58:04.715746 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm" event={"ID":"4f66b8d6-242e-42ea-959d-304f51532744","Type":"ContainerStarted","Data":"66375799bd13ab0a231022e42b79a08245ac5303ebc901b8fbba0fc8443036ad"} Dec 27 05:58:04 crc kubenswrapper[4760]: I1227 05:58:04.718414 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f8bb584c5-skx47" 
event={"ID":"cb3de01a-5bae-4f46-8286-d3a771e9b9e4","Type":"ContainerStarted","Data":"c3d18c6f258f0d95d79b9aa6494a03d0094bf5e21a7081afa2fe77b68256b551"} Dec 27 05:58:04 crc kubenswrapper[4760]: I1227 05:58:04.718442 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f8bb584c5-skx47" event={"ID":"cb3de01a-5bae-4f46-8286-d3a771e9b9e4","Type":"ContainerStarted","Data":"71f7e60d109703a3cdadd6c79c70d0cb8ba2dbe79ddea34b5298635cbd103a5b"} Dec 27 05:58:04 crc kubenswrapper[4760]: I1227 05:58:04.720159 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw" event={"ID":"0666407d-d4c1-497a-ae83-518e6ba70085","Type":"ContainerStarted","Data":"07ee77070023ab3378fb851c4e0c6316e5a45bb05ab72f6ac796b0cf58edb490"} Dec 27 05:58:04 crc kubenswrapper[4760]: I1227 05:58:04.741008 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7f8bb584c5-skx47" podStartSLOduration=1.740991911 podStartE2EDuration="1.740991911s" podCreationTimestamp="2025-12-27 05:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 05:58:04.738392068 +0000 UTC m=+807.498461383" watchObservedRunningTime="2025-12-27 05:58:04.740991911 +0000 UTC m=+807.501061236" Dec 27 05:58:05 crc kubenswrapper[4760]: I1227 05:58:05.287636 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 05:58:05 crc kubenswrapper[4760]: I1227 05:58:05.288083 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 05:58:06 crc kubenswrapper[4760]: I1227 05:58:06.736738 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-fvcbl" event={"ID":"227bb17c-06ab-4906-8d95-b0146bf1868d","Type":"ContainerStarted","Data":"62af887ab0a93722566b482475e01619070bccae410b6a8395ed63c02959c12b"} Dec 27 05:58:06 crc kubenswrapper[4760]: I1227 05:58:06.738569 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-78cl5" event={"ID":"ac16e936-081f-470c-a4a9-480fae986f2e","Type":"ContainerStarted","Data":"01aee22322480b0844433b6933d122cb1ee952a69c8303f57857b73eb64c4e0a"} Dec 27 05:58:06 crc kubenswrapper[4760]: I1227 05:58:06.739650 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm" event={"ID":"4f66b8d6-242e-42ea-959d-304f51532744","Type":"ContainerStarted","Data":"312bc2a3a5428184550b68c560961a1bb546a583af94ee1d0fd7de8cabb396ee"} Dec 27 05:58:06 crc kubenswrapper[4760]: I1227 05:58:06.739953 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm" Dec 27 05:58:06 crc kubenswrapper[4760]: I1227 05:58:06.759417 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm" podStartSLOduration=2.571009403 podStartE2EDuration="4.759398418s" 
podCreationTimestamp="2025-12-27 05:58:02 +0000 UTC" firstStartedPulling="2025-12-27 05:58:04.085918377 +0000 UTC m=+806.845987692" lastFinishedPulling="2025-12-27 05:58:06.274307392 +0000 UTC m=+809.034376707" observedRunningTime="2025-12-27 05:58:06.753037082 +0000 UTC m=+809.513106387" watchObservedRunningTime="2025-12-27 05:58:06.759398418 +0000 UTC m=+809.519467733" Dec 27 05:58:07 crc kubenswrapper[4760]: I1227 05:58:07.746446 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw" event={"ID":"0666407d-d4c1-497a-ae83-518e6ba70085","Type":"ContainerStarted","Data":"9ab0fd4c6bd31a1d3837a7ab0d19a551f3a77dc4287bb7e5b36b8afd68954248"} Dec 27 05:58:07 crc kubenswrapper[4760]: I1227 05:58:07.746807 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-78cl5" Dec 27 05:58:07 crc kubenswrapper[4760]: I1227 05:58:07.768993 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-vn8tw" podStartSLOduration=2.583516311 podStartE2EDuration="5.768973486s" podCreationTimestamp="2025-12-27 05:58:02 +0000 UTC" firstStartedPulling="2025-12-27 05:58:04.140301251 +0000 UTC m=+806.900370566" lastFinishedPulling="2025-12-27 05:58:07.325758426 +0000 UTC m=+810.085827741" observedRunningTime="2025-12-27 05:58:07.764015574 +0000 UTC m=+810.524084889" watchObservedRunningTime="2025-12-27 05:58:07.768973486 +0000 UTC m=+810.529042791" Dec 27 05:58:07 crc kubenswrapper[4760]: I1227 05:58:07.778582 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-78cl5" podStartSLOduration=2.768448665 podStartE2EDuration="5.778563651s" podCreationTimestamp="2025-12-27 05:58:02 +0000 UTC" firstStartedPulling="2025-12-27 05:58:03.252578692 +0000 UTC m=+806.012648017" lastFinishedPulling="2025-12-27 05:58:06.262693638 +0000 UTC m=+809.022763003" observedRunningTime="2025-12-27 05:58:07.776300195 +0000 UTC m=+810.536369520" watchObservedRunningTime="2025-12-27 05:58:07.778563651 +0000 UTC m=+810.538632966" Dec 27 05:58:08 crc kubenswrapper[4760]: I1227 05:58:08.756345 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-fvcbl" event={"ID":"227bb17c-06ab-4906-8d95-b0146bf1868d","Type":"ContainerStarted","Data":"4a09fd4b2e78460f4749963a9f98c05f56ade9b5d6c3c901d608112155da19a2"} Dec 27 05:58:08 crc kubenswrapper[4760]: I1227 05:58:08.781941 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-fvcbl" podStartSLOduration=1.628621884 podStartE2EDuration="6.781915865s" podCreationTimestamp="2025-12-27 05:58:02 +0000 UTC" firstStartedPulling="2025-12-27 05:58:03.410010683 +0000 UTC m=+806.170079998" lastFinishedPulling="2025-12-27 05:58:08.563304654 +0000 UTC m=+811.323373979" observedRunningTime="2025-12-27 05:58:08.777868837 +0000 UTC m=+811.537938152" watchObservedRunningTime="2025-12-27 05:58:08.781915865 +0000 UTC m=+811.541985190" Dec 27 05:58:13 crc kubenswrapper[4760]: I1227 05:58:13.268869 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-78cl5" Dec 27 05:58:13 crc kubenswrapper[4760]: I1227 05:58:13.484354 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:13 crc kubenswrapper[4760]: I1227 05:58:13.484798 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:13 crc kubenswrapper[4760]: I1227 05:58:13.492505 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:13 crc kubenswrapper[4760]: I1227 05:58:13.793867 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7f8bb584c5-skx47" Dec 27 05:58:13 crc kubenswrapper[4760]: I1227 05:58:13.866978 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cmmr9"] Dec 27 05:58:23 crc kubenswrapper[4760]: I1227 05:58:23.818559 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fbhrm" Dec 27 05:58:35 crc kubenswrapper[4760]: I1227 05:58:35.287512 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 05:58:35 crc kubenswrapper[4760]: I1227 05:58:35.288168 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 05:58:35 crc kubenswrapper[4760]: I1227 05:58:35.288233 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 05:58:35 crc kubenswrapper[4760]: I1227 05:58:35.289539 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8bdfa6927c42e93e91cee65f61297025ba0cd7530f28a85f4fd103174d5484b4"} pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 27 05:58:35 crc kubenswrapper[4760]: I1227 05:58:35.289607 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" containerID="cri-o://8bdfa6927c42e93e91cee65f61297025ba0cd7530f28a85f4fd103174d5484b4" gracePeriod=600 Dec 27 05:58:38 crc kubenswrapper[4760]: I1227 05:58:38.932368 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-cmmr9" podUID="3dfa4237-e979-4215-9f2c-20aa6303cae7" containerName="console" containerID="cri-o://bf3ae9a2e49c7495b74bcddb8a514a18b07674abab54acbf538bdff1fcc49dc4" gracePeriod=15 Dec 27 05:58:40 crc kubenswrapper[4760]: I1227 05:58:40.896738 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x6kjl"] Dec 27 05:58:40 crc kubenswrapper[4760]: I1227 05:58:40.901284 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x6kjl" Dec 27 05:58:40 crc kubenswrapper[4760]: I1227 05:58:40.907411 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x6kjl"] Dec 27 05:58:41 crc kubenswrapper[4760]: I1227 05:58:41.067885 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd392d2-7dda-4bed-bdf2-368682cb3c71-utilities\") pod \"community-operators-x6kjl\" (UID: \"ddd392d2-7dda-4bed-bdf2-368682cb3c71\") " pod="openshift-marketplace/community-operators-x6kjl" Dec 27 05:58:41 crc kubenswrapper[4760]: I1227 05:58:41.067933 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd392d2-7dda-4bed-bdf2-368682cb3c71-catalog-content\") pod \"community-operators-x6kjl\" (UID: \"ddd392d2-7dda-4bed-bdf2-368682cb3c71\") " pod="openshift-marketplace/community-operators-x6kjl" Dec 27 05:58:41 crc kubenswrapper[4760]: I1227 05:58:41.068025 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffb79\" (UniqueName: \"kubernetes.io/projected/ddd392d2-7dda-4bed-bdf2-368682cb3c71-kube-api-access-ffb79\") pod \"community-operators-x6kjl\" (UID: \"ddd392d2-7dda-4bed-bdf2-368682cb3c71\") " pod="openshift-marketplace/community-operators-x6kjl" Dec 27 05:58:41 crc kubenswrapper[4760]: I1227 05:58:41.169604 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffb79\" (UniqueName: \"kubernetes.io/projected/ddd392d2-7dda-4bed-bdf2-368682cb3c71-kube-api-access-ffb79\") pod \"community-operators-x6kjl\" (UID: \"ddd392d2-7dda-4bed-bdf2-368682cb3c71\") " pod="openshift-marketplace/community-operators-x6kjl" Dec 27 05:58:41 crc kubenswrapper[4760]: I1227 05:58:41.169681 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd392d2-7dda-4bed-bdf2-368682cb3c71-utilities\") pod \"community-operators-x6kjl\" (UID: \"ddd392d2-7dda-4bed-bdf2-368682cb3c71\") " pod="openshift-marketplace/community-operators-x6kjl" Dec 27 05:58:41 crc kubenswrapper[4760]: I1227 05:58:41.169708 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd392d2-7dda-4bed-bdf2-368682cb3c71-catalog-content\") pod \"community-operators-x6kjl\" (UID: \"ddd392d2-7dda-4bed-bdf2-368682cb3c71\") " pod="openshift-marketplace/community-operators-x6kjl" Dec 27 05:58:41 crc kubenswrapper[4760]: I1227 05:58:41.170250 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd392d2-7dda-4bed-bdf2-368682cb3c71-catalog-content\") pod \"community-operators-x6kjl\" (UID: \"ddd392d2-7dda-4bed-bdf2-368682cb3c71\") " pod="openshift-marketplace/community-operators-x6kjl" Dec 27 05:58:41 crc kubenswrapper[4760]: I1227 05:58:41.170818 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd392d2-7dda-4bed-bdf2-368682cb3c71-utilities\") pod \"community-operators-x6kjl\" (UID: \"ddd392d2-7dda-4bed-bdf2-368682cb3c71\") " pod="openshift-marketplace/community-operators-x6kjl" Dec 27 05:58:41 crc kubenswrapper[4760]: I1227 05:58:41.189300 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ffb79\" (UniqueName: \"kubernetes.io/projected/ddd392d2-7dda-4bed-bdf2-368682cb3c71-kube-api-access-ffb79\") pod \"community-operators-x6kjl\" (UID: \"ddd392d2-7dda-4bed-bdf2-368682cb3c71\") " pod="openshift-marketplace/community-operators-x6kjl" Dec 27 05:58:41 crc kubenswrapper[4760]: I1227 05:58:41.281883 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x6kjl" Dec 27 05:58:41 crc kubenswrapper[4760]: I1227 05:58:41.533542 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x6kjl"] Dec 27 05:58:41 crc kubenswrapper[4760]: W1227 05:58:41.538841 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddd392d2_7dda_4bed_bdf2_368682cb3c71.slice/crio-776a955ba4db285b61677ce49b133d380b44d30114e157f912cb77ac78117a44 WatchSource:0}: Error finding container 776a955ba4db285b61677ce49b133d380b44d30114e157f912cb77ac78117a44: Status 404 returned error can't find the container with id 776a955ba4db285b61677ce49b133d380b44d30114e157f912cb77ac78117a44 Dec 27 05:58:42 crc kubenswrapper[4760]: I1227 05:58:42.514602 4760 patch_prober.go:28] interesting pod/console-f9d7485db-cmmr9 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Dec 27 05:58:42 crc kubenswrapper[4760]: I1227 05:58:42.515042 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-cmmr9" podUID="3dfa4237-e979-4215-9f2c-20aa6303cae7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Dec 27 05:58:42 crc kubenswrapper[4760]: I1227 05:58:42.668960 4760 generic.go:334] "Generic (PLEG): container finished" podID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerID="8bdfa6927c42e93e91cee65f61297025ba0cd7530f28a85f4fd103174d5484b4" exitCode=0 Dec 27 05:58:42 crc kubenswrapper[4760]: I1227 05:58:42.669034 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerDied","Data":"8bdfa6927c42e93e91cee65f61297025ba0cd7530f28a85f4fd103174d5484b4"} Dec 27 05:58:42 crc kubenswrapper[4760]: I1227 05:58:42.669084 4760 scope.go:117] "RemoveContainer" containerID="6c09ce728c7de3887001e96cbc48ccd41a2a698cb90d9eedf6f87ae94181d9e6" Dec 27 05:58:43 crc kubenswrapper[4760]: I1227 05:58:43.678347 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cmmr9_3dfa4237-e979-4215-9f2c-20aa6303cae7/console/0.log" Dec 27 05:58:43 crc kubenswrapper[4760]: I1227 05:58:43.678741 4760 generic.go:334] "Generic (PLEG): container finished" podID="3dfa4237-e979-4215-9f2c-20aa6303cae7" containerID="bf3ae9a2e49c7495b74bcddb8a514a18b07674abab54acbf538bdff1fcc49dc4" exitCode=2 Dec 27 05:58:43 crc kubenswrapper[4760]: I1227 05:58:43.678851 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cmmr9" event={"ID":"3dfa4237-e979-4215-9f2c-20aa6303cae7","Type":"ContainerDied","Data":"bf3ae9a2e49c7495b74bcddb8a514a18b07674abab54acbf538bdff1fcc49dc4"} Dec 27 05:58:43 crc kubenswrapper[4760]: I1227 05:58:43.680898 4760 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6kjl" event={"ID":"ddd392d2-7dda-4bed-bdf2-368682cb3c71","Type":"ContainerStarted","Data":"776a955ba4db285b61677ce49b133d380b44d30114e157f912cb77ac78117a44"} Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.341800 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cmmr9_3dfa4237-e979-4215-9f2c-20aa6303cae7/console/0.log" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.342280 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.515829 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-oauth-serving-cert\") pod \"3dfa4237-e979-4215-9f2c-20aa6303cae7\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.516435 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-oauth-config\") pod \"3dfa4237-e979-4215-9f2c-20aa6303cae7\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.516476 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-service-ca\") pod \"3dfa4237-e979-4215-9f2c-20aa6303cae7\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.516572 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-config\") pod \"3dfa4237-e979-4215-9f2c-20aa6303cae7\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.516607 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-trusted-ca-bundle\") pod \"3dfa4237-e979-4215-9f2c-20aa6303cae7\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.516647 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-serving-cert\") pod \"3dfa4237-e979-4215-9f2c-20aa6303cae7\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.516725 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws2z9\" (UniqueName: \"kubernetes.io/projected/3dfa4237-e979-4215-9f2c-20aa6303cae7-kube-api-access-ws2z9\") pod \"3dfa4237-e979-4215-9f2c-20aa6303cae7\" (UID: \"3dfa4237-e979-4215-9f2c-20aa6303cae7\") " Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.519673 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-service-ca" (OuterVolumeSpecName: "service-ca") pod "3dfa4237-e979-4215-9f2c-20aa6303cae7" (UID: "3dfa4237-e979-4215-9f2c-20aa6303cae7"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.520417 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3dfa4237-e979-4215-9f2c-20aa6303cae7" (UID: "3dfa4237-e979-4215-9f2c-20aa6303cae7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.521270 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-config" (OuterVolumeSpecName: "console-config") pod "3dfa4237-e979-4215-9f2c-20aa6303cae7" (UID: "3dfa4237-e979-4215-9f2c-20aa6303cae7"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.522341 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3dfa4237-e979-4215-9f2c-20aa6303cae7" (UID: "3dfa4237-e979-4215-9f2c-20aa6303cae7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.535932 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3dfa4237-e979-4215-9f2c-20aa6303cae7" (UID: "3dfa4237-e979-4215-9f2c-20aa6303cae7"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.536008 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dfa4237-e979-4215-9f2c-20aa6303cae7-kube-api-access-ws2z9" (OuterVolumeSpecName: "kube-api-access-ws2z9") pod "3dfa4237-e979-4215-9f2c-20aa6303cae7" (UID: "3dfa4237-e979-4215-9f2c-20aa6303cae7"). InnerVolumeSpecName "kube-api-access-ws2z9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.536347 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3dfa4237-e979-4215-9f2c-20aa6303cae7" (UID: "3dfa4237-e979-4215-9f2c-20aa6303cae7"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.618139 4760 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.618188 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-service-ca\") on node \"crc\" DevicePath \"\"" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.618209 4760 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-config\") on node \"crc\" DevicePath \"\"" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.618226 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.618246 4760 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3dfa4237-e979-4215-9f2c-20aa6303cae7-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.618263 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws2z9\" (UniqueName: \"kubernetes.io/projected/3dfa4237-e979-4215-9f2c-20aa6303cae7-kube-api-access-ws2z9\") on node \"crc\" DevicePath \"\"" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.618282 4760 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3dfa4237-e979-4215-9f2c-20aa6303cae7-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.690505 4760 generic.go:334] "Generic (PLEG): container finished" podID="ddd392d2-7dda-4bed-bdf2-368682cb3c71" containerID="f621967ce217ffffab175729c462a3252a00c6d610492b9602a3efe66edba87f" exitCode=0 Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.690602 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6kjl" event={"ID":"ddd392d2-7dda-4bed-bdf2-368682cb3c71","Type":"ContainerDied","Data":"f621967ce217ffffab175729c462a3252a00c6d610492b9602a3efe66edba87f"} Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.693976 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerStarted","Data":"250ddace2814c860fabbd9e2871a90feb8c2c9cfa534ed4a183058d24b4ec76d"} Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.696066 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cmmr9_3dfa4237-e979-4215-9f2c-20aa6303cae7/console/0.log" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.696117 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cmmr9" event={"ID":"3dfa4237-e979-4215-9f2c-20aa6303cae7","Type":"ContainerDied","Data":"6c636926c24158d747f9df0547019b5f12ef61ec7a98d711c7c411a726b49c11"} Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.696146 4760 scope.go:117] "RemoveContainer" 
containerID="bf3ae9a2e49c7495b74bcddb8a514a18b07674abab54acbf538bdff1fcc49dc4" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.696218 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cmmr9" Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.752450 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cmmr9"] Dec 27 05:58:44 crc kubenswrapper[4760]: I1227 05:58:44.759176 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-cmmr9"] Dec 27 05:58:45 crc kubenswrapper[4760]: I1227 05:58:45.739502 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dfa4237-e979-4215-9f2c-20aa6303cae7" path="/var/lib/kubelet/pods/3dfa4237-e979-4215-9f2c-20aa6303cae7/volumes" Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.076682 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld"] Dec 27 05:58:48 crc kubenswrapper[4760]: E1227 05:58:48.077516 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dfa4237-e979-4215-9f2c-20aa6303cae7" containerName="console" Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.077531 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfa4237-e979-4215-9f2c-20aa6303cae7" containerName="console" Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.077675 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dfa4237-e979-4215-9f2c-20aa6303cae7" containerName="console" Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.078541 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.081173 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.092761 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld"] Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.173724 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9015155d-bc63-47e7-8f74-ccf0e122b05f-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld\" (UID: \"9015155d-bc63-47e7-8f74-ccf0e122b05f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.174159 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9015155d-bc63-47e7-8f74-ccf0e122b05f-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld\" (UID: \"9015155d-bc63-47e7-8f74-ccf0e122b05f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.174328 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grwzs\" (UniqueName: \"kubernetes.io/projected/9015155d-bc63-47e7-8f74-ccf0e122b05f-kube-api-access-grwzs\") pod 
\"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld\" (UID: \"9015155d-bc63-47e7-8f74-ccf0e122b05f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.275891 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grwzs\" (UniqueName: \"kubernetes.io/projected/9015155d-bc63-47e7-8f74-ccf0e122b05f-kube-api-access-grwzs\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld\" (UID: \"9015155d-bc63-47e7-8f74-ccf0e122b05f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.275987 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9015155d-bc63-47e7-8f74-ccf0e122b05f-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld\" (UID: \"9015155d-bc63-47e7-8f74-ccf0e122b05f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.276073 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9015155d-bc63-47e7-8f74-ccf0e122b05f-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld\" (UID: \"9015155d-bc63-47e7-8f74-ccf0e122b05f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.276926 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9015155d-bc63-47e7-8f74-ccf0e122b05f-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld\" (UID: \"9015155d-bc63-47e7-8f74-ccf0e122b05f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.276997 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9015155d-bc63-47e7-8f74-ccf0e122b05f-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld\" (UID: \"9015155d-bc63-47e7-8f74-ccf0e122b05f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.301296 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grwzs\" (UniqueName: \"kubernetes.io/projected/9015155d-bc63-47e7-8f74-ccf0e122b05f-kube-api-access-grwzs\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld\" (UID: \"9015155d-bc63-47e7-8f74-ccf0e122b05f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.496561 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" Dec 27 05:58:48 crc kubenswrapper[4760]: I1227 05:58:48.751708 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld"] Dec 27 05:58:50 crc kubenswrapper[4760]: I1227 05:58:50.738231 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" event={"ID":"9015155d-bc63-47e7-8f74-ccf0e122b05f","Type":"ContainerStarted","Data":"c15ea8e5c68063c56bfaf6591de97ef97317b3978b8e52e7af98d36e57ebbe03"} Dec 27 05:58:50 crc kubenswrapper[4760]: I1227 05:58:50.738957 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" event={"ID":"9015155d-bc63-47e7-8f74-ccf0e122b05f","Type":"ContainerStarted","Data":"16a1a9888b1e7cad18c49c49048e8fe5724b77b561078c7e5519d88fad79cd94"} Dec 27 05:58:51 crc kubenswrapper[4760]: I1227 05:58:51.747743 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6kjl" event={"ID":"ddd392d2-7dda-4bed-bdf2-368682cb3c71","Type":"ContainerStarted","Data":"62d46b64176c32d083a8294a088f9567494878b3915b7e1239096ebfea4a8bb6"} Dec 27 05:58:51 crc kubenswrapper[4760]: I1227 05:58:51.749739 4760 generic.go:334] "Generic (PLEG): container finished" podID="9015155d-bc63-47e7-8f74-ccf0e122b05f" containerID="c15ea8e5c68063c56bfaf6591de97ef97317b3978b8e52e7af98d36e57ebbe03" exitCode=0 Dec 27 05:58:51 crc kubenswrapper[4760]: I1227 05:58:51.749788 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" event={"ID":"9015155d-bc63-47e7-8f74-ccf0e122b05f","Type":"ContainerDied","Data":"c15ea8e5c68063c56bfaf6591de97ef97317b3978b8e52e7af98d36e57ebbe03"} Dec 27 05:58:52 crc kubenswrapper[4760]: I1227 05:58:52.761288 4760 generic.go:334] "Generic (PLEG): container finished" podID="ddd392d2-7dda-4bed-bdf2-368682cb3c71" containerID="62d46b64176c32d083a8294a088f9567494878b3915b7e1239096ebfea4a8bb6" exitCode=0 Dec 27 05:58:52 crc kubenswrapper[4760]: I1227 05:58:52.761429 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6kjl" event={"ID":"ddd392d2-7dda-4bed-bdf2-368682cb3c71","Type":"ContainerDied","Data":"62d46b64176c32d083a8294a088f9567494878b3915b7e1239096ebfea4a8bb6"} Dec 27 05:59:00 crc kubenswrapper[4760]: I1227 05:59:00.814559 4760 generic.go:334] "Generic (PLEG): container finished" podID="9015155d-bc63-47e7-8f74-ccf0e122b05f" containerID="097914c518c4c9b594480c176aea940000e4bfd3c46b573c504ddcbc3988f5dd" exitCode=0 Dec 27 05:59:00 crc kubenswrapper[4760]: I1227 05:59:00.814668 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" event={"ID":"9015155d-bc63-47e7-8f74-ccf0e122b05f","Type":"ContainerDied","Data":"097914c518c4c9b594480c176aea940000e4bfd3c46b573c504ddcbc3988f5dd"} Dec 27 05:59:00 crc kubenswrapper[4760]: I1227 05:59:00.819358 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x6kjl" event={"ID":"ddd392d2-7dda-4bed-bdf2-368682cb3c71","Type":"ContainerStarted","Data":"1188a29e79197b10bacac6ad1aa9f74be1d3a5bf9b3a09507316369ef7ea1efe"} Dec 27 05:59:00 crc kubenswrapper[4760]: I1227 
05:59:00.873815 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x6kjl" podStartSLOduration=6.714616375 podStartE2EDuration="20.873796405s" podCreationTimestamp="2025-12-27 05:58:40 +0000 UTC" firstStartedPulling="2025-12-27 05:58:44.695407879 +0000 UTC m=+847.455477194" lastFinishedPulling="2025-12-27 05:58:58.854587909 +0000 UTC m=+861.614657224" observedRunningTime="2025-12-27 05:59:00.868118916 +0000 UTC m=+863.628188251" watchObservedRunningTime="2025-12-27 05:59:00.873796405 +0000 UTC m=+863.633865720" Dec 27 05:59:01 crc kubenswrapper[4760]: I1227 05:59:01.282815 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x6kjl" Dec 27 05:59:01 crc kubenswrapper[4760]: I1227 05:59:01.282867 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x6kjl" Dec 27 05:59:01 crc kubenswrapper[4760]: I1227 05:59:01.832714 4760 generic.go:334] "Generic (PLEG): container finished" podID="9015155d-bc63-47e7-8f74-ccf0e122b05f" containerID="eb5aca16bab1b712efc444d6a6565f43d306388d23db68e9fba4b8155637d89d" exitCode=0 Dec 27 05:59:01 crc kubenswrapper[4760]: I1227 05:59:01.832782 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" event={"ID":"9015155d-bc63-47e7-8f74-ccf0e122b05f","Type":"ContainerDied","Data":"eb5aca16bab1b712efc444d6a6565f43d306388d23db68e9fba4b8155637d89d"} Dec 27 05:59:02 crc kubenswrapper[4760]: I1227 05:59:02.316705 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-x6kjl" podUID="ddd392d2-7dda-4bed-bdf2-368682cb3c71" containerName="registry-server" probeResult="failure" output=< Dec 27 05:59:02 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Dec 27 05:59:02 crc kubenswrapper[4760]: > Dec 27 05:59:03 crc kubenswrapper[4760]: I1227 05:59:03.139982 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" Dec 27 05:59:03 crc kubenswrapper[4760]: I1227 05:59:03.286546 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9015155d-bc63-47e7-8f74-ccf0e122b05f-util\") pod \"9015155d-bc63-47e7-8f74-ccf0e122b05f\" (UID: \"9015155d-bc63-47e7-8f74-ccf0e122b05f\") " Dec 27 05:59:03 crc kubenswrapper[4760]: I1227 05:59:03.286762 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwzs\" (UniqueName: \"kubernetes.io/projected/9015155d-bc63-47e7-8f74-ccf0e122b05f-kube-api-access-grwzs\") pod \"9015155d-bc63-47e7-8f74-ccf0e122b05f\" (UID: \"9015155d-bc63-47e7-8f74-ccf0e122b05f\") " Dec 27 05:59:03 crc kubenswrapper[4760]: I1227 05:59:03.287146 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9015155d-bc63-47e7-8f74-ccf0e122b05f-bundle\") pod \"9015155d-bc63-47e7-8f74-ccf0e122b05f\" (UID: \"9015155d-bc63-47e7-8f74-ccf0e122b05f\") " Dec 27 05:59:03 crc kubenswrapper[4760]: I1227 05:59:03.288037 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9015155d-bc63-47e7-8f74-ccf0e122b05f-bundle" (OuterVolumeSpecName: "bundle") pod "9015155d-bc63-47e7-8f74-ccf0e122b05f" (UID: "9015155d-bc63-47e7-8f74-ccf0e122b05f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:59:03 crc kubenswrapper[4760]: I1227 05:59:03.295392 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9015155d-bc63-47e7-8f74-ccf0e122b05f-kube-api-access-grwzs" (OuterVolumeSpecName: "kube-api-access-grwzs") pod "9015155d-bc63-47e7-8f74-ccf0e122b05f" (UID: "9015155d-bc63-47e7-8f74-ccf0e122b05f"). InnerVolumeSpecName "kube-api-access-grwzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:59:03 crc kubenswrapper[4760]: I1227 05:59:03.307597 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9015155d-bc63-47e7-8f74-ccf0e122b05f-util" (OuterVolumeSpecName: "util") pod "9015155d-bc63-47e7-8f74-ccf0e122b05f" (UID: "9015155d-bc63-47e7-8f74-ccf0e122b05f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:59:03 crc kubenswrapper[4760]: I1227 05:59:03.388792 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9015155d-bc63-47e7-8f74-ccf0e122b05f-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 05:59:03 crc kubenswrapper[4760]: I1227 05:59:03.388837 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9015155d-bc63-47e7-8f74-ccf0e122b05f-util\") on node \"crc\" DevicePath \"\"" Dec 27 05:59:03 crc kubenswrapper[4760]: I1227 05:59:03.388851 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grwzs\" (UniqueName: \"kubernetes.io/projected/9015155d-bc63-47e7-8f74-ccf0e122b05f-kube-api-access-grwzs\") on node \"crc\" DevicePath \"\"" Dec 27 05:59:03 crc kubenswrapper[4760]: I1227 05:59:03.849854 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" event={"ID":"9015155d-bc63-47e7-8f74-ccf0e122b05f","Type":"ContainerDied","Data":"16a1a9888b1e7cad18c49c49048e8fe5724b77b561078c7e5519d88fad79cd94"} Dec 27 05:59:03 crc kubenswrapper[4760]: I1227 05:59:03.849914 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16a1a9888b1e7cad18c49c49048e8fe5724b77b561078c7e5519d88fad79cd94" Dec 27 05:59:03 crc kubenswrapper[4760]: I1227 05:59:03.849987 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld" Dec 27 05:59:11 crc kubenswrapper[4760]: I1227 05:59:11.345353 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x6kjl" Dec 27 05:59:11 crc kubenswrapper[4760]: I1227 05:59:11.402473 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x6kjl" Dec 27 05:59:13 crc kubenswrapper[4760]: I1227 05:59:13.388858 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x6kjl"] Dec 27 05:59:13 crc kubenswrapper[4760]: I1227 05:59:13.735303 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sjlcr"] Dec 27 05:59:13 crc kubenswrapper[4760]: I1227 05:59:13.735551 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sjlcr" podUID="d88183c3-8274-4fc3-85b8-5f1f4e15a77b" containerName="registry-server" containerID="cri-o://4732e962a6c9ea98e2cd1480a7c6ae2c776c620700221dd31d4a07b36f0ab8c9" gracePeriod=2 Dec 27 05:59:16 crc kubenswrapper[4760]: I1227 05:59:16.856325 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5657944b46-gqznd"] Dec 27 05:59:16 crc kubenswrapper[4760]: E1227 05:59:16.856907 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9015155d-bc63-47e7-8f74-ccf0e122b05f" containerName="util" Dec 27 05:59:16 crc kubenswrapper[4760]: I1227 05:59:16.856918 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9015155d-bc63-47e7-8f74-ccf0e122b05f" containerName="util" Dec 27 05:59:16 crc kubenswrapper[4760]: E1227 05:59:16.856930 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9015155d-bc63-47e7-8f74-ccf0e122b05f" containerName="extract" Dec 27 05:59:16 crc kubenswrapper[4760]: 
I1227 05:59:16.856936 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9015155d-bc63-47e7-8f74-ccf0e122b05f" containerName="extract" Dec 27 05:59:16 crc kubenswrapper[4760]: E1227 05:59:16.856953 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9015155d-bc63-47e7-8f74-ccf0e122b05f" containerName="pull" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:16.856960 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9015155d-bc63-47e7-8f74-ccf0e122b05f" containerName="pull" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:16.857055 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9015155d-bc63-47e7-8f74-ccf0e122b05f" containerName="extract" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:16.857539 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5657944b46-gqznd" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:16.877205 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:16.877406 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:16.877574 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-tb67r" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:16.877701 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:16.890433 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:16.900441 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5657944b46-gqznd"] Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:16.928954 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc9fca43-72d1-4ee1-b32e-dda6d593659d-apiservice-cert\") pod \"metallb-operator-controller-manager-5657944b46-gqznd\" (UID: \"cc9fca43-72d1-4ee1-b32e-dda6d593659d\") " pod="metallb-system/metallb-operator-controller-manager-5657944b46-gqznd" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:16.929028 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc9fca43-72d1-4ee1-b32e-dda6d593659d-webhook-cert\") pod \"metallb-operator-controller-manager-5657944b46-gqznd\" (UID: \"cc9fca43-72d1-4ee1-b32e-dda6d593659d\") " pod="metallb-system/metallb-operator-controller-manager-5657944b46-gqznd" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:16.929051 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkc7v\" (UniqueName: \"kubernetes.io/projected/cc9fca43-72d1-4ee1-b32e-dda6d593659d-kube-api-access-nkc7v\") pod \"metallb-operator-controller-manager-5657944b46-gqznd\" (UID: \"cc9fca43-72d1-4ee1-b32e-dda6d593659d\") " pod="metallb-system/metallb-operator-controller-manager-5657944b46-gqznd" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:16.948388 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="d88183c3-8274-4fc3-85b8-5f1f4e15a77b" containerID="4732e962a6c9ea98e2cd1480a7c6ae2c776c620700221dd31d4a07b36f0ab8c9" exitCode=0 Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:16.948427 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjlcr" event={"ID":"d88183c3-8274-4fc3-85b8-5f1f4e15a77b","Type":"ContainerDied","Data":"4732e962a6c9ea98e2cd1480a7c6ae2c776c620700221dd31d4a07b36f0ab8c9"} Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.029782 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc9fca43-72d1-4ee1-b32e-dda6d593659d-apiservice-cert\") pod \"metallb-operator-controller-manager-5657944b46-gqznd\" (UID: \"cc9fca43-72d1-4ee1-b32e-dda6d593659d\") " pod="metallb-system/metallb-operator-controller-manager-5657944b46-gqznd" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.029834 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc9fca43-72d1-4ee1-b32e-dda6d593659d-webhook-cert\") pod \"metallb-operator-controller-manager-5657944b46-gqznd\" (UID: \"cc9fca43-72d1-4ee1-b32e-dda6d593659d\") " pod="metallb-system/metallb-operator-controller-manager-5657944b46-gqznd" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.029854 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkc7v\" (UniqueName: \"kubernetes.io/projected/cc9fca43-72d1-4ee1-b32e-dda6d593659d-kube-api-access-nkc7v\") pod \"metallb-operator-controller-manager-5657944b46-gqznd\" (UID: \"cc9fca43-72d1-4ee1-b32e-dda6d593659d\") " pod="metallb-system/metallb-operator-controller-manager-5657944b46-gqznd" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.035266 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc9fca43-72d1-4ee1-b32e-dda6d593659d-webhook-cert\") pod \"metallb-operator-controller-manager-5657944b46-gqznd\" (UID: \"cc9fca43-72d1-4ee1-b32e-dda6d593659d\") " pod="metallb-system/metallb-operator-controller-manager-5657944b46-gqznd" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.049845 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkc7v\" (UniqueName: \"kubernetes.io/projected/cc9fca43-72d1-4ee1-b32e-dda6d593659d-kube-api-access-nkc7v\") pod \"metallb-operator-controller-manager-5657944b46-gqznd\" (UID: \"cc9fca43-72d1-4ee1-b32e-dda6d593659d\") " pod="metallb-system/metallb-operator-controller-manager-5657944b46-gqznd" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.050133 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc9fca43-72d1-4ee1-b32e-dda6d593659d-apiservice-cert\") pod \"metallb-operator-controller-manager-5657944b46-gqznd\" (UID: \"cc9fca43-72d1-4ee1-b32e-dda6d593659d\") " pod="metallb-system/metallb-operator-controller-manager-5657944b46-gqznd" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.174264 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5657944b46-gqznd" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.195343 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6"] Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.196186 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.201438 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.201497 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.201501 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-kgq7g" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.216137 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6"] Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.232957 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86q92\" (UniqueName: \"kubernetes.io/projected/a365d3ec-e794-4f15-9e95-1c502f25d860-kube-api-access-86q92\") pod \"metallb-operator-webhook-server-797b4b66d5-rcqt6\" (UID: \"a365d3ec-e794-4f15-9e95-1c502f25d860\") " pod="metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.233030 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a365d3ec-e794-4f15-9e95-1c502f25d860-apiservice-cert\") pod \"metallb-operator-webhook-server-797b4b66d5-rcqt6\" (UID: \"a365d3ec-e794-4f15-9e95-1c502f25d860\") " pod="metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.233052 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a365d3ec-e794-4f15-9e95-1c502f25d860-webhook-cert\") pod \"metallb-operator-webhook-server-797b4b66d5-rcqt6\" (UID: \"a365d3ec-e794-4f15-9e95-1c502f25d860\") " pod="metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.334080 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86q92\" (UniqueName: \"kubernetes.io/projected/a365d3ec-e794-4f15-9e95-1c502f25d860-kube-api-access-86q92\") pod \"metallb-operator-webhook-server-797b4b66d5-rcqt6\" (UID: \"a365d3ec-e794-4f15-9e95-1c502f25d860\") " pod="metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.334452 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a365d3ec-e794-4f15-9e95-1c502f25d860-apiservice-cert\") pod \"metallb-operator-webhook-server-797b4b66d5-rcqt6\" (UID: \"a365d3ec-e794-4f15-9e95-1c502f25d860\") " pod="metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.334475 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a365d3ec-e794-4f15-9e95-1c502f25d860-webhook-cert\") pod \"metallb-operator-webhook-server-797b4b66d5-rcqt6\" (UID: \"a365d3ec-e794-4f15-9e95-1c502f25d860\") " pod="metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.350384 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a365d3ec-e794-4f15-9e95-1c502f25d860-apiservice-cert\") pod \"metallb-operator-webhook-server-797b4b66d5-rcqt6\" (UID: \"a365d3ec-e794-4f15-9e95-1c502f25d860\") " pod="metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.353778 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a365d3ec-e794-4f15-9e95-1c502f25d860-webhook-cert\") pod \"metallb-operator-webhook-server-797b4b66d5-rcqt6\" (UID: \"a365d3ec-e794-4f15-9e95-1c502f25d860\") " pod="metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.354020 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86q92\" (UniqueName: \"kubernetes.io/projected/a365d3ec-e794-4f15-9e95-1c502f25d860-kube-api-access-86q92\") pod \"metallb-operator-webhook-server-797b4b66d5-rcqt6\" (UID: \"a365d3ec-e794-4f15-9e95-1c502f25d860\") " pod="metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.544789 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.661920 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5657944b46-gqznd"] Dec 27 05:59:17 crc kubenswrapper[4760]: W1227 05:59:17.677113 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc9fca43_72d1_4ee1_b32e_dda6d593659d.slice/crio-f62b3479278f159af5c4cbc38c498d2061223eeda96eb4174290fdb9d8174685 WatchSource:0}: Error finding container f62b3479278f159af5c4cbc38c498d2061223eeda96eb4174290fdb9d8174685: Status 404 returned error can't find the container with id f62b3479278f159af5c4cbc38c498d2061223eeda96eb4174290fdb9d8174685 Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.827859 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6"] Dec 27 05:59:17 crc kubenswrapper[4760]: W1227 05:59:17.836218 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda365d3ec_e794_4f15_9e95_1c502f25d860.slice/crio-713da2a783ad227e88bb6710ef9f49c88ab4aa6c3b624140d7469dc6ae945bcf WatchSource:0}: Error finding container 713da2a783ad227e88bb6710ef9f49c88ab4aa6c3b624140d7469dc6ae945bcf: Status 404 returned error can't find the container with id 713da2a783ad227e88bb6710ef9f49c88ab4aa6c3b624140d7469dc6ae945bcf Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.968574 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6" 
event={"ID":"a365d3ec-e794-4f15-9e95-1c502f25d860","Type":"ContainerStarted","Data":"713da2a783ad227e88bb6710ef9f49c88ab4aa6c3b624140d7469dc6ae945bcf"} Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.970498 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjlcr" event={"ID":"d88183c3-8274-4fc3-85b8-5f1f4e15a77b","Type":"ContainerDied","Data":"8171423e4969219ddbd5520acaa25b12a872773eb3ff91ded37c899525451dda"} Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.970515 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8171423e4969219ddbd5520acaa25b12a872773eb3ff91ded37c899525451dda" Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.971852 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5657944b46-gqznd" event={"ID":"cc9fca43-72d1-4ee1-b32e-dda6d593659d","Type":"ContainerStarted","Data":"f62b3479278f159af5c4cbc38c498d2061223eeda96eb4174290fdb9d8174685"} Dec 27 05:59:17 crc kubenswrapper[4760]: I1227 05:59:17.972332 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sjlcr" Dec 27 05:59:18 crc kubenswrapper[4760]: I1227 05:59:18.046465 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-utilities\") pod \"d88183c3-8274-4fc3-85b8-5f1f4e15a77b\" (UID: \"d88183c3-8274-4fc3-85b8-5f1f4e15a77b\") " Dec 27 05:59:18 crc kubenswrapper[4760]: I1227 05:59:18.046820 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52gjp\" (UniqueName: \"kubernetes.io/projected/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-kube-api-access-52gjp\") pod \"d88183c3-8274-4fc3-85b8-5f1f4e15a77b\" (UID: \"d88183c3-8274-4fc3-85b8-5f1f4e15a77b\") " Dec 27 05:59:18 crc kubenswrapper[4760]: I1227 05:59:18.046980 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-catalog-content\") pod \"d88183c3-8274-4fc3-85b8-5f1f4e15a77b\" (UID: \"d88183c3-8274-4fc3-85b8-5f1f4e15a77b\") " Dec 27 05:59:18 crc kubenswrapper[4760]: I1227 05:59:18.047352 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-utilities" (OuterVolumeSpecName: "utilities") pod "d88183c3-8274-4fc3-85b8-5f1f4e15a77b" (UID: "d88183c3-8274-4fc3-85b8-5f1f4e15a77b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:59:18 crc kubenswrapper[4760]: I1227 05:59:18.053367 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-kube-api-access-52gjp" (OuterVolumeSpecName: "kube-api-access-52gjp") pod "d88183c3-8274-4fc3-85b8-5f1f4e15a77b" (UID: "d88183c3-8274-4fc3-85b8-5f1f4e15a77b"). InnerVolumeSpecName "kube-api-access-52gjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:59:18 crc kubenswrapper[4760]: I1227 05:59:18.114881 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d88183c3-8274-4fc3-85b8-5f1f4e15a77b" (UID: "d88183c3-8274-4fc3-85b8-5f1f4e15a77b"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:59:18 crc kubenswrapper[4760]: I1227 05:59:18.148628 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 05:59:18 crc kubenswrapper[4760]: I1227 05:59:18.148657 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52gjp\" (UniqueName: \"kubernetes.io/projected/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-kube-api-access-52gjp\") on node \"crc\" DevicePath \"\"" Dec 27 05:59:18 crc kubenswrapper[4760]: I1227 05:59:18.148667 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88183c3-8274-4fc3-85b8-5f1f4e15a77b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 05:59:18 crc kubenswrapper[4760]: I1227 05:59:18.976680 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sjlcr" Dec 27 05:59:19 crc kubenswrapper[4760]: I1227 05:59:19.009942 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sjlcr"] Dec 27 05:59:19 crc kubenswrapper[4760]: I1227 05:59:19.017697 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sjlcr"] Dec 27 05:59:19 crc kubenswrapper[4760]: I1227 05:59:19.509158 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d88183c3-8274-4fc3-85b8-5f1f4e15a77b" path="/var/lib/kubelet/pods/d88183c3-8274-4fc3-85b8-5f1f4e15a77b/volumes" Dec 27 05:59:24 crc kubenswrapper[4760]: I1227 05:59:24.004069 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6" event={"ID":"a365d3ec-e794-4f15-9e95-1c502f25d860","Type":"ContainerStarted","Data":"d4aa886fc42b2f8090744da195241b36694bea2243db9d558710ed55d30eda89"} Dec 27 05:59:24 crc kubenswrapper[4760]: I1227 05:59:24.004669 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6" Dec 27 05:59:24 crc kubenswrapper[4760]: I1227 05:59:24.006843 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5657944b46-gqznd" event={"ID":"cc9fca43-72d1-4ee1-b32e-dda6d593659d","Type":"ContainerStarted","Data":"278af452dd53ed95a079976baa312d4307aa88d840abf7cdf9875ae4055f4bc1"} Dec 27 05:59:24 crc kubenswrapper[4760]: I1227 05:59:24.006988 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5657944b46-gqznd" Dec 27 05:59:24 crc kubenswrapper[4760]: I1227 05:59:24.021124 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6" podStartSLOduration=1.260700254 podStartE2EDuration="7.021107353s" podCreationTimestamp="2025-12-27 05:59:17 +0000 UTC" firstStartedPulling="2025-12-27 05:59:17.83929997 +0000 UTC m=+880.599369285" lastFinishedPulling="2025-12-27 05:59:23.599707069 +0000 UTC m=+886.359776384" observedRunningTime="2025-12-27 05:59:24.019036973 +0000 UTC m=+886.779106298" watchObservedRunningTime="2025-12-27 05:59:24.021107353 +0000 UTC m=+886.781176658" Dec 27 05:59:24 crc kubenswrapper[4760]: I1227 05:59:24.040647 4760 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5657944b46-gqznd" podStartSLOduration=2.186668008 podStartE2EDuration="8.040627162s" podCreationTimestamp="2025-12-27 05:59:16 +0000 UTC" firstStartedPulling="2025-12-27 05:59:17.682792072 +0000 UTC m=+880.442861387" lastFinishedPulling="2025-12-27 05:59:23.536751216 +0000 UTC m=+886.296820541" observedRunningTime="2025-12-27 05:59:24.036503921 +0000 UTC m=+886.796573256" watchObservedRunningTime="2025-12-27 05:59:24.040627162 +0000 UTC m=+886.800696477" Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.646780 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-52vdm"] Dec 27 05:59:32 crc kubenswrapper[4760]: E1227 05:59:32.647469 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88183c3-8274-4fc3-85b8-5f1f4e15a77b" containerName="extract-utilities" Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.647484 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88183c3-8274-4fc3-85b8-5f1f4e15a77b" containerName="extract-utilities" Dec 27 05:59:32 crc kubenswrapper[4760]: E1227 05:59:32.647512 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88183c3-8274-4fc3-85b8-5f1f4e15a77b" containerName="registry-server" Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.647536 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88183c3-8274-4fc3-85b8-5f1f4e15a77b" containerName="registry-server" Dec 27 05:59:32 crc kubenswrapper[4760]: E1227 05:59:32.647550 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88183c3-8274-4fc3-85b8-5f1f4e15a77b" containerName="extract-content" Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.647558 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88183c3-8274-4fc3-85b8-5f1f4e15a77b" containerName="extract-content" Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.647672 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88183c3-8274-4fc3-85b8-5f1f4e15a77b" containerName="registry-server" Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.648754 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.661884 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-52vdm"] Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.748326 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-utilities\") pod \"redhat-marketplace-52vdm\" (UID: \"fa3d276e-527e-4ef9-aeb4-ae6554cfd792\") " pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.748411 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-catalog-content\") pod \"redhat-marketplace-52vdm\" (UID: \"fa3d276e-527e-4ef9-aeb4-ae6554cfd792\") " pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.748437 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rwr8\" (UniqueName: \"kubernetes.io/projected/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-kube-api-access-9rwr8\") pod \"redhat-marketplace-52vdm\" (UID: \"fa3d276e-527e-4ef9-aeb4-ae6554cfd792\") " pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.850060 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-utilities\") pod \"redhat-marketplace-52vdm\" (UID: \"fa3d276e-527e-4ef9-aeb4-ae6554cfd792\") " pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.850162 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rwr8\" (UniqueName: \"kubernetes.io/projected/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-kube-api-access-9rwr8\") pod \"redhat-marketplace-52vdm\" (UID: \"fa3d276e-527e-4ef9-aeb4-ae6554cfd792\") " pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.850184 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-catalog-content\") pod \"redhat-marketplace-52vdm\" (UID: \"fa3d276e-527e-4ef9-aeb4-ae6554cfd792\") " pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.850558 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-utilities\") pod \"redhat-marketplace-52vdm\" (UID: \"fa3d276e-527e-4ef9-aeb4-ae6554cfd792\") " pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.850605 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-catalog-content\") pod \"redhat-marketplace-52vdm\" (UID: \"fa3d276e-527e-4ef9-aeb4-ae6554cfd792\") " pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.871015 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9rwr8\" (UniqueName: \"kubernetes.io/projected/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-kube-api-access-9rwr8\") pod \"redhat-marketplace-52vdm\" (UID: \"fa3d276e-527e-4ef9-aeb4-ae6554cfd792\") " pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:32 crc kubenswrapper[4760]: I1227 05:59:32.971424 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:33 crc kubenswrapper[4760]: I1227 05:59:33.409499 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-52vdm"] Dec 27 05:59:34 crc kubenswrapper[4760]: I1227 05:59:34.073671 4760 generic.go:334] "Generic (PLEG): container finished" podID="fa3d276e-527e-4ef9-aeb4-ae6554cfd792" containerID="463f9cda58ffa08f545291e98e012ae6083291b35cc85f8dab3825b84a35a4b9" exitCode=0 Dec 27 05:59:34 crc kubenswrapper[4760]: I1227 05:59:34.073738 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52vdm" event={"ID":"fa3d276e-527e-4ef9-aeb4-ae6554cfd792","Type":"ContainerDied","Data":"463f9cda58ffa08f545291e98e012ae6083291b35cc85f8dab3825b84a35a4b9"} Dec 27 05:59:34 crc kubenswrapper[4760]: I1227 05:59:34.073772 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52vdm" event={"ID":"fa3d276e-527e-4ef9-aeb4-ae6554cfd792","Type":"ContainerStarted","Data":"2ab4dd6f03b37ee9f2e4e4e2c54979059da665f9f71d4f7ff0a9ecc66e2ebca8"} Dec 27 05:59:37 crc kubenswrapper[4760]: I1227 05:59:37.099992 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52vdm" event={"ID":"fa3d276e-527e-4ef9-aeb4-ae6554cfd792","Type":"ContainerStarted","Data":"ec2ed0e75b3872736858a01022fb0a0cc1027ccdb4aff37f1d71ad089bc99e61"} Dec 27 05:59:37 crc kubenswrapper[4760]: I1227 05:59:37.550423 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-797b4b66d5-rcqt6" Dec 27 05:59:38 crc kubenswrapper[4760]: I1227 05:59:38.115372 4760 generic.go:334] "Generic (PLEG): container finished" podID="fa3d276e-527e-4ef9-aeb4-ae6554cfd792" containerID="ec2ed0e75b3872736858a01022fb0a0cc1027ccdb4aff37f1d71ad089bc99e61" exitCode=0 Dec 27 05:59:38 crc kubenswrapper[4760]: I1227 05:59:38.115422 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52vdm" event={"ID":"fa3d276e-527e-4ef9-aeb4-ae6554cfd792","Type":"ContainerDied","Data":"ec2ed0e75b3872736858a01022fb0a0cc1027ccdb4aff37f1d71ad089bc99e61"} Dec 27 05:59:38 crc kubenswrapper[4760]: I1227 05:59:38.633460 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lt97z"] Dec 27 05:59:38 crc kubenswrapper[4760]: I1227 05:59:38.635401 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lt97z" Dec 27 05:59:38 crc kubenswrapper[4760]: I1227 05:59:38.645855 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lt97z"] Dec 27 05:59:38 crc kubenswrapper[4760]: I1227 05:59:38.728846 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-utilities\") pod \"certified-operators-lt97z\" (UID: \"508b7ccd-bda5-495a-991f-2f0dcfc52cd3\") " pod="openshift-marketplace/certified-operators-lt97z" Dec 27 05:59:38 crc kubenswrapper[4760]: I1227 05:59:38.728931 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgsfd\" (UniqueName: \"kubernetes.io/projected/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-kube-api-access-lgsfd\") pod \"certified-operators-lt97z\" (UID: \"508b7ccd-bda5-495a-991f-2f0dcfc52cd3\") " pod="openshift-marketplace/certified-operators-lt97z" Dec 27 05:59:38 crc kubenswrapper[4760]: I1227 05:59:38.728959 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-catalog-content\") pod \"certified-operators-lt97z\" (UID: \"508b7ccd-bda5-495a-991f-2f0dcfc52cd3\") " pod="openshift-marketplace/certified-operators-lt97z" Dec 27 05:59:38 crc kubenswrapper[4760]: I1227 05:59:38.830081 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-utilities\") pod \"certified-operators-lt97z\" (UID: \"508b7ccd-bda5-495a-991f-2f0dcfc52cd3\") " pod="openshift-marketplace/certified-operators-lt97z" Dec 27 05:59:38 crc kubenswrapper[4760]: I1227 05:59:38.830159 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgsfd\" (UniqueName: \"kubernetes.io/projected/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-kube-api-access-lgsfd\") pod \"certified-operators-lt97z\" (UID: \"508b7ccd-bda5-495a-991f-2f0dcfc52cd3\") " pod="openshift-marketplace/certified-operators-lt97z" Dec 27 05:59:38 crc kubenswrapper[4760]: I1227 05:59:38.830182 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-catalog-content\") pod \"certified-operators-lt97z\" (UID: \"508b7ccd-bda5-495a-991f-2f0dcfc52cd3\") " pod="openshift-marketplace/certified-operators-lt97z" Dec 27 05:59:38 crc kubenswrapper[4760]: I1227 05:59:38.830582 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-catalog-content\") pod \"certified-operators-lt97z\" (UID: \"508b7ccd-bda5-495a-991f-2f0dcfc52cd3\") " pod="openshift-marketplace/certified-operators-lt97z" Dec 27 05:59:38 crc kubenswrapper[4760]: I1227 05:59:38.830641 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-utilities\") pod \"certified-operators-lt97z\" (UID: \"508b7ccd-bda5-495a-991f-2f0dcfc52cd3\") " pod="openshift-marketplace/certified-operators-lt97z" Dec 27 05:59:38 crc kubenswrapper[4760]: I1227 05:59:38.856047 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lgsfd\" (UniqueName: \"kubernetes.io/projected/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-kube-api-access-lgsfd\") pod \"certified-operators-lt97z\" (UID: \"508b7ccd-bda5-495a-991f-2f0dcfc52cd3\") " pod="openshift-marketplace/certified-operators-lt97z" Dec 27 05:59:38 crc kubenswrapper[4760]: I1227 05:59:38.972857 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lt97z" Dec 27 05:59:39 crc kubenswrapper[4760]: I1227 05:59:39.140775 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52vdm" event={"ID":"fa3d276e-527e-4ef9-aeb4-ae6554cfd792","Type":"ContainerStarted","Data":"2db3f345ab2f696c5eca6b574873e2b7d23ec12921e3ce2d86b24154ab53e01d"} Dec 27 05:59:39 crc kubenswrapper[4760]: I1227 05:59:39.167438 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-52vdm" podStartSLOduration=2.7080764950000003 podStartE2EDuration="7.167421099s" podCreationTimestamp="2025-12-27 05:59:32 +0000 UTC" firstStartedPulling="2025-12-27 05:59:34.076021786 +0000 UTC m=+896.836091111" lastFinishedPulling="2025-12-27 05:59:38.5353664 +0000 UTC m=+901.295435715" observedRunningTime="2025-12-27 05:59:39.166690901 +0000 UTC m=+901.926760216" watchObservedRunningTime="2025-12-27 05:59:39.167421099 +0000 UTC m=+901.927490414" Dec 27 05:59:39 crc kubenswrapper[4760]: I1227 05:59:39.481419 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lt97z"] Dec 27 05:59:40 crc kubenswrapper[4760]: I1227 05:59:40.152176 4760 generic.go:334] "Generic (PLEG): container finished" podID="508b7ccd-bda5-495a-991f-2f0dcfc52cd3" containerID="2d13bcf3c8a2fe3e5e6908672be8e38269350587323fa39eb2546e82bf49157b" exitCode=0 Dec 27 05:59:40 crc kubenswrapper[4760]: I1227 05:59:40.152315 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lt97z" event={"ID":"508b7ccd-bda5-495a-991f-2f0dcfc52cd3","Type":"ContainerDied","Data":"2d13bcf3c8a2fe3e5e6908672be8e38269350587323fa39eb2546e82bf49157b"} Dec 27 05:59:40 crc kubenswrapper[4760]: I1227 05:59:40.152634 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lt97z" event={"ID":"508b7ccd-bda5-495a-991f-2f0dcfc52cd3","Type":"ContainerStarted","Data":"e9dc95be8bedeacaa4374ff964ad4d96977181e431f9c2b21da95d00b5806a15"} Dec 27 05:59:42 crc kubenswrapper[4760]: I1227 05:59:42.972318 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:42 crc kubenswrapper[4760]: I1227 05:59:42.972725 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:43 crc kubenswrapper[4760]: I1227 05:59:43.023564 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:43 crc kubenswrapper[4760]: I1227 05:59:43.210957 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:43 crc kubenswrapper[4760]: I1227 05:59:43.801644 4760 scope.go:117] "RemoveContainer" containerID="c42ee81510684501378f16244618a6d09858b67b4c62da198b049fe25d255c4e" Dec 27 05:59:43 crc kubenswrapper[4760]: I1227 05:59:43.830414 4760 scope.go:117] 
"RemoveContainer" containerID="4732e962a6c9ea98e2cd1480a7c6ae2c776c620700221dd31d4a07b36f0ab8c9" Dec 27 05:59:43 crc kubenswrapper[4760]: I1227 05:59:43.856221 4760 scope.go:117] "RemoveContainer" containerID="6b5457ff0d3ee06cad60b58f58febf3bf4efdf84f6f5149ab7ce19466bbccbe7" Dec 27 05:59:46 crc kubenswrapper[4760]: I1227 05:59:46.190181 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lt97z" event={"ID":"508b7ccd-bda5-495a-991f-2f0dcfc52cd3","Type":"ContainerStarted","Data":"9bc4ee1e409c9acbee9e0daa870234edc7f925cdee479c38e687c2bf79aff4f9"} Dec 27 05:59:46 crc kubenswrapper[4760]: I1227 05:59:46.827430 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-52vdm"] Dec 27 05:59:46 crc kubenswrapper[4760]: I1227 05:59:46.827798 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-52vdm" podUID="fa3d276e-527e-4ef9-aeb4-ae6554cfd792" containerName="registry-server" containerID="cri-o://2db3f345ab2f696c5eca6b574873e2b7d23ec12921e3ce2d86b24154ab53e01d" gracePeriod=2 Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.186389 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.197580 4760 generic.go:334] "Generic (PLEG): container finished" podID="fa3d276e-527e-4ef9-aeb4-ae6554cfd792" containerID="2db3f345ab2f696c5eca6b574873e2b7d23ec12921e3ce2d86b24154ab53e01d" exitCode=0 Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.197666 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52vdm" event={"ID":"fa3d276e-527e-4ef9-aeb4-ae6554cfd792","Type":"ContainerDied","Data":"2db3f345ab2f696c5eca6b574873e2b7d23ec12921e3ce2d86b24154ab53e01d"} Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.197698 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52vdm" event={"ID":"fa3d276e-527e-4ef9-aeb4-ae6554cfd792","Type":"ContainerDied","Data":"2ab4dd6f03b37ee9f2e4e4e2c54979059da665f9f71d4f7ff0a9ecc66e2ebca8"} Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.197719 4760 scope.go:117] "RemoveContainer" containerID="2db3f345ab2f696c5eca6b574873e2b7d23ec12921e3ce2d86b24154ab53e01d" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.197873 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52vdm" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.206199 4760 generic.go:334] "Generic (PLEG): container finished" podID="508b7ccd-bda5-495a-991f-2f0dcfc52cd3" containerID="9bc4ee1e409c9acbee9e0daa870234edc7f925cdee479c38e687c2bf79aff4f9" exitCode=0 Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.206251 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lt97z" event={"ID":"508b7ccd-bda5-495a-991f-2f0dcfc52cd3","Type":"ContainerDied","Data":"9bc4ee1e409c9acbee9e0daa870234edc7f925cdee479c38e687c2bf79aff4f9"} Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.223259 4760 scope.go:117] "RemoveContainer" containerID="ec2ed0e75b3872736858a01022fb0a0cc1027ccdb4aff37f1d71ad089bc99e61" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.239835 4760 scope.go:117] "RemoveContainer" containerID="463f9cda58ffa08f545291e98e012ae6083291b35cc85f8dab3825b84a35a4b9" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.270060 4760 scope.go:117] "RemoveContainer" containerID="2db3f345ab2f696c5eca6b574873e2b7d23ec12921e3ce2d86b24154ab53e01d" Dec 27 05:59:47 crc kubenswrapper[4760]: E1227 05:59:47.270603 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2db3f345ab2f696c5eca6b574873e2b7d23ec12921e3ce2d86b24154ab53e01d\": container with ID starting with 2db3f345ab2f696c5eca6b574873e2b7d23ec12921e3ce2d86b24154ab53e01d not found: ID does not exist" containerID="2db3f345ab2f696c5eca6b574873e2b7d23ec12921e3ce2d86b24154ab53e01d" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.270650 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2db3f345ab2f696c5eca6b574873e2b7d23ec12921e3ce2d86b24154ab53e01d"} err="failed to get container status \"2db3f345ab2f696c5eca6b574873e2b7d23ec12921e3ce2d86b24154ab53e01d\": rpc error: code = NotFound desc = could not find container \"2db3f345ab2f696c5eca6b574873e2b7d23ec12921e3ce2d86b24154ab53e01d\": container with ID starting with 2db3f345ab2f696c5eca6b574873e2b7d23ec12921e3ce2d86b24154ab53e01d not found: ID does not exist" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.270680 4760 scope.go:117] "RemoveContainer" containerID="ec2ed0e75b3872736858a01022fb0a0cc1027ccdb4aff37f1d71ad089bc99e61" Dec 27 05:59:47 crc kubenswrapper[4760]: E1227 05:59:47.270971 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec2ed0e75b3872736858a01022fb0a0cc1027ccdb4aff37f1d71ad089bc99e61\": container with ID starting with ec2ed0e75b3872736858a01022fb0a0cc1027ccdb4aff37f1d71ad089bc99e61 not found: ID does not exist" containerID="ec2ed0e75b3872736858a01022fb0a0cc1027ccdb4aff37f1d71ad089bc99e61" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.271002 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec2ed0e75b3872736858a01022fb0a0cc1027ccdb4aff37f1d71ad089bc99e61"} err="failed to get container status \"ec2ed0e75b3872736858a01022fb0a0cc1027ccdb4aff37f1d71ad089bc99e61\": rpc error: code = NotFound desc = could not find container \"ec2ed0e75b3872736858a01022fb0a0cc1027ccdb4aff37f1d71ad089bc99e61\": container with ID starting with ec2ed0e75b3872736858a01022fb0a0cc1027ccdb4aff37f1d71ad089bc99e61 not found: ID does not exist" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.271027 4760 
scope.go:117] "RemoveContainer" containerID="463f9cda58ffa08f545291e98e012ae6083291b35cc85f8dab3825b84a35a4b9" Dec 27 05:59:47 crc kubenswrapper[4760]: E1227 05:59:47.272231 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"463f9cda58ffa08f545291e98e012ae6083291b35cc85f8dab3825b84a35a4b9\": container with ID starting with 463f9cda58ffa08f545291e98e012ae6083291b35cc85f8dab3825b84a35a4b9 not found: ID does not exist" containerID="463f9cda58ffa08f545291e98e012ae6083291b35cc85f8dab3825b84a35a4b9" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.272254 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"463f9cda58ffa08f545291e98e012ae6083291b35cc85f8dab3825b84a35a4b9"} err="failed to get container status \"463f9cda58ffa08f545291e98e012ae6083291b35cc85f8dab3825b84a35a4b9\": rpc error: code = NotFound desc = could not find container \"463f9cda58ffa08f545291e98e012ae6083291b35cc85f8dab3825b84a35a4b9\": container with ID starting with 463f9cda58ffa08f545291e98e012ae6083291b35cc85f8dab3825b84a35a4b9 not found: ID does not exist" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.289375 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-catalog-content\") pod \"fa3d276e-527e-4ef9-aeb4-ae6554cfd792\" (UID: \"fa3d276e-527e-4ef9-aeb4-ae6554cfd792\") " Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.289446 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-utilities\") pod \"fa3d276e-527e-4ef9-aeb4-ae6554cfd792\" (UID: \"fa3d276e-527e-4ef9-aeb4-ae6554cfd792\") " Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.289489 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rwr8\" (UniqueName: \"kubernetes.io/projected/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-kube-api-access-9rwr8\") pod \"fa3d276e-527e-4ef9-aeb4-ae6554cfd792\" (UID: \"fa3d276e-527e-4ef9-aeb4-ae6554cfd792\") " Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.291983 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-utilities" (OuterVolumeSpecName: "utilities") pod "fa3d276e-527e-4ef9-aeb4-ae6554cfd792" (UID: "fa3d276e-527e-4ef9-aeb4-ae6554cfd792"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.296745 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-kube-api-access-9rwr8" (OuterVolumeSpecName: "kube-api-access-9rwr8") pod "fa3d276e-527e-4ef9-aeb4-ae6554cfd792" (UID: "fa3d276e-527e-4ef9-aeb4-ae6554cfd792"). InnerVolumeSpecName "kube-api-access-9rwr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.312737 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa3d276e-527e-4ef9-aeb4-ae6554cfd792" (UID: "fa3d276e-527e-4ef9-aeb4-ae6554cfd792"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.390895 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.390946 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.390965 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rwr8\" (UniqueName: \"kubernetes.io/projected/fa3d276e-527e-4ef9-aeb4-ae6554cfd792-kube-api-access-9rwr8\") on node \"crc\" DevicePath \"\"" Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.540725 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-52vdm"] Dec 27 05:59:47 crc kubenswrapper[4760]: I1227 05:59:47.545568 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-52vdm"] Dec 27 05:59:48 crc kubenswrapper[4760]: I1227 05:59:48.217212 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lt97z" event={"ID":"508b7ccd-bda5-495a-991f-2f0dcfc52cd3","Type":"ContainerStarted","Data":"2975bdc2fd27440cb3559dfd25dc68875de14f6eadfddd9fc23a17d363feeede"} Dec 27 05:59:48 crc kubenswrapper[4760]: I1227 05:59:48.238106 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lt97z" podStartSLOduration=2.776676872 podStartE2EDuration="10.238070588s" podCreationTimestamp="2025-12-27 05:59:38 +0000 UTC" firstStartedPulling="2025-12-27 05:59:40.154631488 +0000 UTC m=+902.914700833" lastFinishedPulling="2025-12-27 05:59:47.616025244 +0000 UTC m=+910.376094549" observedRunningTime="2025-12-27 05:59:48.232534839 +0000 UTC m=+910.992604164" watchObservedRunningTime="2025-12-27 05:59:48.238070588 +0000 UTC m=+910.998139913" Dec 27 05:59:48 crc kubenswrapper[4760]: I1227 05:59:48.973444 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lt97z" Dec 27 05:59:48 crc kubenswrapper[4760]: I1227 05:59:48.973508 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lt97z" Dec 27 05:59:49 crc kubenswrapper[4760]: I1227 05:59:49.511301 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa3d276e-527e-4ef9-aeb4-ae6554cfd792" path="/var/lib/kubelet/pods/fa3d276e-527e-4ef9-aeb4-ae6554cfd792/volumes" Dec 27 05:59:50 crc kubenswrapper[4760]: I1227 05:59:50.028030 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-lt97z" podUID="508b7ccd-bda5-495a-991f-2f0dcfc52cd3" containerName="registry-server" probeResult="failure" output=< Dec 27 05:59:50 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Dec 27 05:59:50 crc kubenswrapper[4760]: > Dec 27 05:59:57 crc kubenswrapper[4760]: I1227 05:59:57.177251 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5657944b46-gqznd" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.017946 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-dmlkj"] Dec 27 05:59:58 crc kubenswrapper[4760]: E1227 05:59:58.018478 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa3d276e-527e-4ef9-aeb4-ae6554cfd792" containerName="extract-content" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.018521 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa3d276e-527e-4ef9-aeb4-ae6554cfd792" containerName="extract-content" Dec 27 05:59:58 crc kubenswrapper[4760]: E1227 05:59:58.018574 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa3d276e-527e-4ef9-aeb4-ae6554cfd792" containerName="extract-utilities" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.018602 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa3d276e-527e-4ef9-aeb4-ae6554cfd792" containerName="extract-utilities" Dec 27 05:59:58 crc kubenswrapper[4760]: E1227 05:59:58.018647 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa3d276e-527e-4ef9-aeb4-ae6554cfd792" containerName="registry-server" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.018667 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa3d276e-527e-4ef9-aeb4-ae6554cfd792" containerName="registry-server" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.018982 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa3d276e-527e-4ef9-aeb4-ae6554cfd792" containerName="registry-server" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.019934 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dmlkj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.024338 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-sjf4c"] Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.027775 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.030174 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-ttl2n" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.031331 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.032058 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.032319 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.041894 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-dmlkj"] Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.129012 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-fj4vj"] Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.130037 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-fj4vj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.130726 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-metrics\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.130816 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-frr-conf\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.130872 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4aff88c-89b8-47dd-a535-c806e81073ff-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-dmlkj\" (UID: \"b4aff88c-89b8-47dd-a535-c806e81073ff\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dmlkj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.130897 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9mnx\" (UniqueName: \"kubernetes.io/projected/b4aff88c-89b8-47dd-a535-c806e81073ff-kube-api-access-d9mnx\") pod \"frr-k8s-webhook-server-7784b6fcf-dmlkj\" (UID: \"b4aff88c-89b8-47dd-a535-c806e81073ff\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dmlkj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.130921 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7b6f\" (UniqueName: \"kubernetes.io/projected/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-kube-api-access-f7b6f\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.130941 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-frr-sockets\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.130967 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-frr-startup\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.130984 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-reloader\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.131057 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-metrics-certs\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " 
pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.132034 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.133478 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.133639 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-sgcpb" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.133866 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.165569 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-sjdjf"] Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.166498 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-sjdjf" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.168512 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.195451 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-sjdjf"] Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.231772 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-metrics-certs\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.231817 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-metrics\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.231845 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-frr-conf\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.231865 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4aff88c-89b8-47dd-a535-c806e81073ff-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-dmlkj\" (UID: \"b4aff88c-89b8-47dd-a535-c806e81073ff\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dmlkj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.231891 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d4d0e743-0bef-49a7-9e0c-15c2e212944a-memberlist\") pod \"speaker-fj4vj\" (UID: \"d4d0e743-0bef-49a7-9e0c-15c2e212944a\") " pod="metallb-system/speaker-fj4vj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.231907 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9mnx\" (UniqueName: \"kubernetes.io/projected/b4aff88c-89b8-47dd-a535-c806e81073ff-kube-api-access-d9mnx\") pod \"frr-k8s-webhook-server-7784b6fcf-dmlkj\" 
(UID: \"b4aff88c-89b8-47dd-a535-c806e81073ff\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dmlkj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.231933 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7b6f\" (UniqueName: \"kubernetes.io/projected/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-kube-api-access-f7b6f\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.231953 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-frr-sockets\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.231971 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9d7a04b-5e48-421d-8f15-567afba65f65-cert\") pod \"controller-5bddd4b946-sjdjf\" (UID: \"a9d7a04b-5e48-421d-8f15-567afba65f65\") " pod="metallb-system/controller-5bddd4b946-sjdjf" Dec 27 05:59:58 crc kubenswrapper[4760]: E1227 05:59:58.231968 4760 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.231998 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-frr-startup\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: E1227 05:59:58.232143 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-metrics-certs podName:6c6dbd13-bfc2-4fde-9787-c1c4b2793736 nodeName:}" failed. No retries permitted until 2025-12-27 05:59:58.732124969 +0000 UTC m=+921.492194274 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-metrics-certs") pod "frr-k8s-sjf4c" (UID: "6c6dbd13-bfc2-4fde-9787-c1c4b2793736") : secret "frr-k8s-certs-secret" not found Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.232163 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d4d0e743-0bef-49a7-9e0c-15c2e212944a-metallb-excludel2\") pod \"speaker-fj4vj\" (UID: \"d4d0e743-0bef-49a7-9e0c-15c2e212944a\") " pod="metallb-system/speaker-fj4vj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.232186 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-reloader\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.232208 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thm4l\" (UniqueName: \"kubernetes.io/projected/d4d0e743-0bef-49a7-9e0c-15c2e212944a-kube-api-access-thm4l\") pod \"speaker-fj4vj\" (UID: \"d4d0e743-0bef-49a7-9e0c-15c2e212944a\") " pod="metallb-system/speaker-fj4vj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.232290 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-metrics\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.232305 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d4d0e743-0bef-49a7-9e0c-15c2e212944a-metrics-certs\") pod \"speaker-fj4vj\" (UID: \"d4d0e743-0bef-49a7-9e0c-15c2e212944a\") " pod="metallb-system/speaker-fj4vj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.232416 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grs7g\" (UniqueName: \"kubernetes.io/projected/a9d7a04b-5e48-421d-8f15-567afba65f65-kube-api-access-grs7g\") pod \"controller-5bddd4b946-sjdjf\" (UID: \"a9d7a04b-5e48-421d-8f15-567afba65f65\") " pod="metallb-system/controller-5bddd4b946-sjdjf" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.232432 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-frr-sockets\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.232445 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9d7a04b-5e48-421d-8f15-567afba65f65-metrics-certs\") pod \"controller-5bddd4b946-sjdjf\" (UID: \"a9d7a04b-5e48-421d-8f15-567afba65f65\") " pod="metallb-system/controller-5bddd4b946-sjdjf" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.232547 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-reloader\") pod \"frr-k8s-sjf4c\" 
(UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.232899 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-frr-conf\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.232953 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-frr-startup\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.236827 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4aff88c-89b8-47dd-a535-c806e81073ff-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-dmlkj\" (UID: \"b4aff88c-89b8-47dd-a535-c806e81073ff\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dmlkj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.263767 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9mnx\" (UniqueName: \"kubernetes.io/projected/b4aff88c-89b8-47dd-a535-c806e81073ff-kube-api-access-d9mnx\") pod \"frr-k8s-webhook-server-7784b6fcf-dmlkj\" (UID: \"b4aff88c-89b8-47dd-a535-c806e81073ff\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dmlkj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.297726 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7b6f\" (UniqueName: \"kubernetes.io/projected/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-kube-api-access-f7b6f\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.333952 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d4d0e743-0bef-49a7-9e0c-15c2e212944a-metallb-excludel2\") pod \"speaker-fj4vj\" (UID: \"d4d0e743-0bef-49a7-9e0c-15c2e212944a\") " pod="metallb-system/speaker-fj4vj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.334001 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thm4l\" (UniqueName: \"kubernetes.io/projected/d4d0e743-0bef-49a7-9e0c-15c2e212944a-kube-api-access-thm4l\") pod \"speaker-fj4vj\" (UID: \"d4d0e743-0bef-49a7-9e0c-15c2e212944a\") " pod="metallb-system/speaker-fj4vj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.334124 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d4d0e743-0bef-49a7-9e0c-15c2e212944a-metrics-certs\") pod \"speaker-fj4vj\" (UID: \"d4d0e743-0bef-49a7-9e0c-15c2e212944a\") " pod="metallb-system/speaker-fj4vj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.334157 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grs7g\" (UniqueName: \"kubernetes.io/projected/a9d7a04b-5e48-421d-8f15-567afba65f65-kube-api-access-grs7g\") pod \"controller-5bddd4b946-sjdjf\" (UID: \"a9d7a04b-5e48-421d-8f15-567afba65f65\") " pod="metallb-system/controller-5bddd4b946-sjdjf" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.334174 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9d7a04b-5e48-421d-8f15-567afba65f65-metrics-certs\") pod \"controller-5bddd4b946-sjdjf\" (UID: \"a9d7a04b-5e48-421d-8f15-567afba65f65\") " pod="metallb-system/controller-5bddd4b946-sjdjf" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.334227 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d4d0e743-0bef-49a7-9e0c-15c2e212944a-memberlist\") pod \"speaker-fj4vj\" (UID: \"d4d0e743-0bef-49a7-9e0c-15c2e212944a\") " pod="metallb-system/speaker-fj4vj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.334256 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9d7a04b-5e48-421d-8f15-567afba65f65-cert\") pod \"controller-5bddd4b946-sjdjf\" (UID: \"a9d7a04b-5e48-421d-8f15-567afba65f65\") " pod="metallb-system/controller-5bddd4b946-sjdjf" Dec 27 05:59:58 crc kubenswrapper[4760]: E1227 05:59:58.334783 4760 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 27 05:59:58 crc kubenswrapper[4760]: E1227 05:59:58.334833 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4d0e743-0bef-49a7-9e0c-15c2e212944a-memberlist podName:d4d0e743-0bef-49a7-9e0c-15c2e212944a nodeName:}" failed. No retries permitted until 2025-12-27 05:59:58.834819667 +0000 UTC m=+921.594888982 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d4d0e743-0bef-49a7-9e0c-15c2e212944a-memberlist") pod "speaker-fj4vj" (UID: "d4d0e743-0bef-49a7-9e0c-15c2e212944a") : secret "metallb-memberlist" not found Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.334780 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d4d0e743-0bef-49a7-9e0c-15c2e212944a-metallb-excludel2\") pod \"speaker-fj4vj\" (UID: \"d4d0e743-0bef-49a7-9e0c-15c2e212944a\") " pod="metallb-system/speaker-fj4vj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.337629 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9d7a04b-5e48-421d-8f15-567afba65f65-metrics-certs\") pod \"controller-5bddd4b946-sjdjf\" (UID: \"a9d7a04b-5e48-421d-8f15-567afba65f65\") " pod="metallb-system/controller-5bddd4b946-sjdjf" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.338066 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.338653 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d4d0e743-0bef-49a7-9e0c-15c2e212944a-metrics-certs\") pod \"speaker-fj4vj\" (UID: \"d4d0e743-0bef-49a7-9e0c-15c2e212944a\") " pod="metallb-system/speaker-fj4vj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.339976 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dmlkj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.349492 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9d7a04b-5e48-421d-8f15-567afba65f65-cert\") pod \"controller-5bddd4b946-sjdjf\" (UID: \"a9d7a04b-5e48-421d-8f15-567afba65f65\") " pod="metallb-system/controller-5bddd4b946-sjdjf" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.353608 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thm4l\" (UniqueName: \"kubernetes.io/projected/d4d0e743-0bef-49a7-9e0c-15c2e212944a-kube-api-access-thm4l\") pod \"speaker-fj4vj\" (UID: \"d4d0e743-0bef-49a7-9e0c-15c2e212944a\") " pod="metallb-system/speaker-fj4vj" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.356529 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grs7g\" (UniqueName: \"kubernetes.io/projected/a9d7a04b-5e48-421d-8f15-567afba65f65-kube-api-access-grs7g\") pod \"controller-5bddd4b946-sjdjf\" (UID: \"a9d7a04b-5e48-421d-8f15-567afba65f65\") " pod="metallb-system/controller-5bddd4b946-sjdjf" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.479765 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-sjdjf" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.739218 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-metrics-certs\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.743214 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c6dbd13-bfc2-4fde-9787-c1c4b2793736-metrics-certs\") pod \"frr-k8s-sjf4c\" (UID: \"6c6dbd13-bfc2-4fde-9787-c1c4b2793736\") " pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.752909 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-dmlkj"] Dec 27 05:59:58 crc kubenswrapper[4760]: W1227 05:59:58.759816 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4aff88c_89b8_47dd_a535_c806e81073ff.slice/crio-7f3d00a557f43fc75c6ca8aa7244d9313564aadfe43e00b223c083c625fe2307 WatchSource:0}: Error finding container 7f3d00a557f43fc75c6ca8aa7244d9313564aadfe43e00b223c083c625fe2307: Status 404 returned error can't find the container with id 7f3d00a557f43fc75c6ca8aa7244d9313564aadfe43e00b223c083c625fe2307 Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.881630 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d4d0e743-0bef-49a7-9e0c-15c2e212944a-memberlist\") pod \"speaker-fj4vj\" (UID: \"d4d0e743-0bef-49a7-9e0c-15c2e212944a\") " pod="metallb-system/speaker-fj4vj" Dec 27 05:59:58 crc kubenswrapper[4760]: E1227 05:59:58.881884 4760 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 27 05:59:58 crc kubenswrapper[4760]: E1227 05:59:58.881976 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d4d0e743-0bef-49a7-9e0c-15c2e212944a-memberlist podName:d4d0e743-0bef-49a7-9e0c-15c2e212944a nodeName:}" failed. No retries permitted until 2025-12-27 05:59:59.881950997 +0000 UTC m=+922.642020342 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d4d0e743-0bef-49a7-9e0c-15c2e212944a-memberlist") pod "speaker-fj4vj" (UID: "d4d0e743-0bef-49a7-9e0c-15c2e212944a") : secret "metallb-memberlist" not found Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.925854 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-sjdjf"] Dec 27 05:59:58 crc kubenswrapper[4760]: W1227 05:59:58.931045 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9d7a04b_5e48_421d_8f15_567afba65f65.slice/crio-428bce9a6013cf1821d7a3331c8ac4f2a3dfb640904aec3ca5911bc07ac0b256 WatchSource:0}: Error finding container 428bce9a6013cf1821d7a3331c8ac4f2a3dfb640904aec3ca5911bc07ac0b256: Status 404 returned error can't find the container with id 428bce9a6013cf1821d7a3331c8ac4f2a3dfb640904aec3ca5911bc07ac0b256 Dec 27 05:59:58 crc kubenswrapper[4760]: I1227 05:59:58.948511 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-sjf4c" Dec 27 05:59:59 crc kubenswrapper[4760]: I1227 05:59:59.031468 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lt97z" Dec 27 05:59:59 crc kubenswrapper[4760]: I1227 05:59:59.079798 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lt97z" Dec 27 05:59:59 crc kubenswrapper[4760]: I1227 05:59:59.315536 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dmlkj" event={"ID":"b4aff88c-89b8-47dd-a535-c806e81073ff","Type":"ContainerStarted","Data":"7f3d00a557f43fc75c6ca8aa7244d9313564aadfe43e00b223c083c625fe2307"} Dec 27 05:59:59 crc kubenswrapper[4760]: I1227 05:59:59.317953 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-sjdjf" event={"ID":"a9d7a04b-5e48-421d-8f15-567afba65f65","Type":"ContainerStarted","Data":"428bce9a6013cf1821d7a3331c8ac4f2a3dfb640904aec3ca5911bc07ac0b256"} Dec 27 05:59:59 crc kubenswrapper[4760]: I1227 05:59:59.894931 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d4d0e743-0bef-49a7-9e0c-15c2e212944a-memberlist\") pod \"speaker-fj4vj\" (UID: \"d4d0e743-0bef-49a7-9e0c-15c2e212944a\") " pod="metallb-system/speaker-fj4vj" Dec 27 05:59:59 crc kubenswrapper[4760]: I1227 05:59:59.900787 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d4d0e743-0bef-49a7-9e0c-15c2e212944a-memberlist\") pod \"speaker-fj4vj\" (UID: \"d4d0e743-0bef-49a7-9e0c-15c2e212944a\") " pod="metallb-system/speaker-fj4vj" Dec 27 05:59:59 crc kubenswrapper[4760]: I1227 05:59:59.942953 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-fj4vj" Dec 27 05:59:59 crc kubenswrapper[4760]: W1227 05:59:59.971992 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4d0e743_0bef_49a7_9e0c_15c2e212944a.slice/crio-a83f5e3afea56cb39c3645f2dd797aae97ebb0a8f058e27d242a6e4e3ee585a2 WatchSource:0}: Error finding container a83f5e3afea56cb39c3645f2dd797aae97ebb0a8f058e27d242a6e4e3ee585a2: Status 404 returned error can't find the container with id a83f5e3afea56cb39c3645f2dd797aae97ebb0a8f058e27d242a6e4e3ee585a2 Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.143668 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2"] Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.144493 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2" Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.146572 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.146767 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.171443 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2"] Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.299588 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-config-volume\") pod \"collect-profiles-29446920-jk2h2\" (UID: \"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2" Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.299661 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-secret-volume\") pod \"collect-profiles-29446920-jk2h2\" (UID: \"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2" Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.299716 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl4rr\" (UniqueName: \"kubernetes.io/projected/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-kube-api-access-bl4rr\") pod \"collect-profiles-29446920-jk2h2\" (UID: \"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2" Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.369360 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjf4c" event={"ID":"6c6dbd13-bfc2-4fde-9787-c1c4b2793736","Type":"ContainerStarted","Data":"0cdccd5a9a6b009218ea8a2a5b61da68ea39be487fc3a7af269238fd01427255"} Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.385082 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-sjdjf" event={"ID":"a9d7a04b-5e48-421d-8f15-567afba65f65","Type":"ContainerStarted","Data":"d566b9904b794399bf4e126346827b1b0f2e61eaa1bafe3114f2bf44c77f947a"} 
Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.385153 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-sjdjf" event={"ID":"a9d7a04b-5e48-421d-8f15-567afba65f65","Type":"ContainerStarted","Data":"5b9d75726fd96e08fe7be520f6914fa78783e0c1ff7f2fdd0cb10d4492bc013f"} Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.386074 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-sjdjf" Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.398333 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fj4vj" event={"ID":"d4d0e743-0bef-49a7-9e0c-15c2e212944a","Type":"ContainerStarted","Data":"7e1a0753b191d05f779eeb0087150e486ed9225ea1c8651ccf4caf559e8eddf0"} Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.398382 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fj4vj" event={"ID":"d4d0e743-0bef-49a7-9e0c-15c2e212944a","Type":"ContainerStarted","Data":"a83f5e3afea56cb39c3645f2dd797aae97ebb0a8f058e27d242a6e4e3ee585a2"} Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.402832 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-config-volume\") pod \"collect-profiles-29446920-jk2h2\" (UID: \"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2" Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.402908 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-secret-volume\") pod \"collect-profiles-29446920-jk2h2\" (UID: \"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2" Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.402951 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl4rr\" (UniqueName: \"kubernetes.io/projected/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-kube-api-access-bl4rr\") pod \"collect-profiles-29446920-jk2h2\" (UID: \"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2" Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.404383 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-config-volume\") pod \"collect-profiles-29446920-jk2h2\" (UID: \"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2" Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.409773 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-secret-volume\") pod \"collect-profiles-29446920-jk2h2\" (UID: \"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2" Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.465794 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl4rr\" (UniqueName: \"kubernetes.io/projected/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-kube-api-access-bl4rr\") pod \"collect-profiles-29446920-jk2h2\" (UID: \"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd\") 
" pod="openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2" Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.466066 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2" Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.802024 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-sjdjf" podStartSLOduration=2.802007929 podStartE2EDuration="2.802007929s" podCreationTimestamp="2025-12-27 05:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 06:00:00.418816427 +0000 UTC m=+923.178885742" watchObservedRunningTime="2025-12-27 06:00:00.802007929 +0000 UTC m=+923.562077244" Dec 27 06:00:00 crc kubenswrapper[4760]: I1227 06:00:00.805523 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2"] Dec 27 06:00:01 crc kubenswrapper[4760]: I1227 06:00:01.406395 4760 generic.go:334] "Generic (PLEG): container finished" podID="95c3d786-3ff6-4cbf-a2ae-4c588e6155fd" containerID="2bd4aca2e3e080c3cb8b89a03489c4e9e1da4c18fe34b61e7cba12f043d9e7cf" exitCode=0 Dec 27 06:00:01 crc kubenswrapper[4760]: I1227 06:00:01.406552 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2" event={"ID":"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd","Type":"ContainerDied","Data":"2bd4aca2e3e080c3cb8b89a03489c4e9e1da4c18fe34b61e7cba12f043d9e7cf"} Dec 27 06:00:01 crc kubenswrapper[4760]: I1227 06:00:01.407258 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2" event={"ID":"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd","Type":"ContainerStarted","Data":"80fc9eff45d949716113cc43a37b2a81327d7b1ac203a2d0bf17361569bf94e4"} Dec 27 06:00:01 crc kubenswrapper[4760]: I1227 06:00:01.413444 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fj4vj" event={"ID":"d4d0e743-0bef-49a7-9e0c-15c2e212944a","Type":"ContainerStarted","Data":"65d96a8c579453a52d069114264444030287dc9279f11f6b531972b4162fb820"} Dec 27 06:00:01 crc kubenswrapper[4760]: I1227 06:00:01.413480 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-fj4vj" Dec 27 06:00:01 crc kubenswrapper[4760]: I1227 06:00:01.429082 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lt97z"] Dec 27 06:00:01 crc kubenswrapper[4760]: I1227 06:00:01.429354 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lt97z" podUID="508b7ccd-bda5-495a-991f-2f0dcfc52cd3" containerName="registry-server" containerID="cri-o://2975bdc2fd27440cb3559dfd25dc68875de14f6eadfddd9fc23a17d363feeede" gracePeriod=2 Dec 27 06:00:01 crc kubenswrapper[4760]: I1227 06:00:01.445778 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-fj4vj" podStartSLOduration=3.445753836 podStartE2EDuration="3.445753836s" podCreationTimestamp="2025-12-27 05:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 06:00:01.441783897 +0000 UTC m=+924.201853212" watchObservedRunningTime="2025-12-27 06:00:01.445753836 +0000 
UTC m=+924.205823151" Dec 27 06:00:01 crc kubenswrapper[4760]: I1227 06:00:01.838573 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lt97z" Dec 27 06:00:01 crc kubenswrapper[4760]: I1227 06:00:01.925540 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-utilities\") pod \"508b7ccd-bda5-495a-991f-2f0dcfc52cd3\" (UID: \"508b7ccd-bda5-495a-991f-2f0dcfc52cd3\") " Dec 27 06:00:01 crc kubenswrapper[4760]: I1227 06:00:01.925607 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgsfd\" (UniqueName: \"kubernetes.io/projected/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-kube-api-access-lgsfd\") pod \"508b7ccd-bda5-495a-991f-2f0dcfc52cd3\" (UID: \"508b7ccd-bda5-495a-991f-2f0dcfc52cd3\") " Dec 27 06:00:01 crc kubenswrapper[4760]: I1227 06:00:01.926413 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-utilities" (OuterVolumeSpecName: "utilities") pod "508b7ccd-bda5-495a-991f-2f0dcfc52cd3" (UID: "508b7ccd-bda5-495a-991f-2f0dcfc52cd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:00:01 crc kubenswrapper[4760]: I1227 06:00:01.929389 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-catalog-content\") pod \"508b7ccd-bda5-495a-991f-2f0dcfc52cd3\" (UID: \"508b7ccd-bda5-495a-991f-2f0dcfc52cd3\") " Dec 27 06:00:01 crc kubenswrapper[4760]: I1227 06:00:01.929762 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 06:00:01 crc kubenswrapper[4760]: I1227 06:00:01.930448 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-kube-api-access-lgsfd" (OuterVolumeSpecName: "kube-api-access-lgsfd") pod "508b7ccd-bda5-495a-991f-2f0dcfc52cd3" (UID: "508b7ccd-bda5-495a-991f-2f0dcfc52cd3"). InnerVolumeSpecName "kube-api-access-lgsfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:00:01 crc kubenswrapper[4760]: I1227 06:00:01.971678 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "508b7ccd-bda5-495a-991f-2f0dcfc52cd3" (UID: "508b7ccd-bda5-495a-991f-2f0dcfc52cd3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.030865 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.030901 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgsfd\" (UniqueName: \"kubernetes.io/projected/508b7ccd-bda5-495a-991f-2f0dcfc52cd3-kube-api-access-lgsfd\") on node \"crc\" DevicePath \"\"" Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.442496 4760 generic.go:334] "Generic (PLEG): container finished" podID="508b7ccd-bda5-495a-991f-2f0dcfc52cd3" containerID="2975bdc2fd27440cb3559dfd25dc68875de14f6eadfddd9fc23a17d363feeede" exitCode=0 Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.442560 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lt97z" Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.442553 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lt97z" event={"ID":"508b7ccd-bda5-495a-991f-2f0dcfc52cd3","Type":"ContainerDied","Data":"2975bdc2fd27440cb3559dfd25dc68875de14f6eadfddd9fc23a17d363feeede"} Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.442627 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lt97z" event={"ID":"508b7ccd-bda5-495a-991f-2f0dcfc52cd3","Type":"ContainerDied","Data":"e9dc95be8bedeacaa4374ff964ad4d96977181e431f9c2b21da95d00b5806a15"} Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.442653 4760 scope.go:117] "RemoveContainer" containerID="2975bdc2fd27440cb3559dfd25dc68875de14f6eadfddd9fc23a17d363feeede" Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.464655 4760 scope.go:117] "RemoveContainer" containerID="9bc4ee1e409c9acbee9e0daa870234edc7f925cdee479c38e687c2bf79aff4f9" Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.473608 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lt97z"] Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.481705 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lt97z"] Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.493052 4760 scope.go:117] "RemoveContainer" containerID="2d13bcf3c8a2fe3e5e6908672be8e38269350587323fa39eb2546e82bf49157b" Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.530070 4760 scope.go:117] "RemoveContainer" containerID="2975bdc2fd27440cb3559dfd25dc68875de14f6eadfddd9fc23a17d363feeede" Dec 27 06:00:02 crc kubenswrapper[4760]: E1227 06:00:02.531228 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2975bdc2fd27440cb3559dfd25dc68875de14f6eadfddd9fc23a17d363feeede\": container with ID starting with 2975bdc2fd27440cb3559dfd25dc68875de14f6eadfddd9fc23a17d363feeede not found: ID does not exist" containerID="2975bdc2fd27440cb3559dfd25dc68875de14f6eadfddd9fc23a17d363feeede" Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.531280 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2975bdc2fd27440cb3559dfd25dc68875de14f6eadfddd9fc23a17d363feeede"} err="failed to get container status 
\"2975bdc2fd27440cb3559dfd25dc68875de14f6eadfddd9fc23a17d363feeede\": rpc error: code = NotFound desc = could not find container \"2975bdc2fd27440cb3559dfd25dc68875de14f6eadfddd9fc23a17d363feeede\": container with ID starting with 2975bdc2fd27440cb3559dfd25dc68875de14f6eadfddd9fc23a17d363feeede not found: ID does not exist" Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.531312 4760 scope.go:117] "RemoveContainer" containerID="9bc4ee1e409c9acbee9e0daa870234edc7f925cdee479c38e687c2bf79aff4f9" Dec 27 06:00:02 crc kubenswrapper[4760]: E1227 06:00:02.531726 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bc4ee1e409c9acbee9e0daa870234edc7f925cdee479c38e687c2bf79aff4f9\": container with ID starting with 9bc4ee1e409c9acbee9e0daa870234edc7f925cdee479c38e687c2bf79aff4f9 not found: ID does not exist" containerID="9bc4ee1e409c9acbee9e0daa870234edc7f925cdee479c38e687c2bf79aff4f9" Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.531764 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bc4ee1e409c9acbee9e0daa870234edc7f925cdee479c38e687c2bf79aff4f9"} err="failed to get container status \"9bc4ee1e409c9acbee9e0daa870234edc7f925cdee479c38e687c2bf79aff4f9\": rpc error: code = NotFound desc = could not find container \"9bc4ee1e409c9acbee9e0daa870234edc7f925cdee479c38e687c2bf79aff4f9\": container with ID starting with 9bc4ee1e409c9acbee9e0daa870234edc7f925cdee479c38e687c2bf79aff4f9 not found: ID does not exist" Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.531794 4760 scope.go:117] "RemoveContainer" containerID="2d13bcf3c8a2fe3e5e6908672be8e38269350587323fa39eb2546e82bf49157b" Dec 27 06:00:02 crc kubenswrapper[4760]: E1227 06:00:02.532291 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d13bcf3c8a2fe3e5e6908672be8e38269350587323fa39eb2546e82bf49157b\": container with ID starting with 2d13bcf3c8a2fe3e5e6908672be8e38269350587323fa39eb2546e82bf49157b not found: ID does not exist" containerID="2d13bcf3c8a2fe3e5e6908672be8e38269350587323fa39eb2546e82bf49157b" Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.532338 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d13bcf3c8a2fe3e5e6908672be8e38269350587323fa39eb2546e82bf49157b"} err="failed to get container status \"2d13bcf3c8a2fe3e5e6908672be8e38269350587323fa39eb2546e82bf49157b\": rpc error: code = NotFound desc = could not find container \"2d13bcf3c8a2fe3e5e6908672be8e38269350587323fa39eb2546e82bf49157b\": container with ID starting with 2d13bcf3c8a2fe3e5e6908672be8e38269350587323fa39eb2546e82bf49157b not found: ID does not exist" Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.766350 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2" Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.941975 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl4rr\" (UniqueName: \"kubernetes.io/projected/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-kube-api-access-bl4rr\") pod \"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd\" (UID: \"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd\") " Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.942331 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-secret-volume\") pod \"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd\" (UID: \"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd\") " Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.942385 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-config-volume\") pod \"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd\" (UID: \"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd\") " Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.943229 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-config-volume" (OuterVolumeSpecName: "config-volume") pod "95c3d786-3ff6-4cbf-a2ae-4c588e6155fd" (UID: "95c3d786-3ff6-4cbf-a2ae-4c588e6155fd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.947568 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-kube-api-access-bl4rr" (OuterVolumeSpecName: "kube-api-access-bl4rr") pod "95c3d786-3ff6-4cbf-a2ae-4c588e6155fd" (UID: "95c3d786-3ff6-4cbf-a2ae-4c588e6155fd"). InnerVolumeSpecName "kube-api-access-bl4rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:00:02 crc kubenswrapper[4760]: I1227 06:00:02.947586 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "95c3d786-3ff6-4cbf-a2ae-4c588e6155fd" (UID: "95c3d786-3ff6-4cbf-a2ae-4c588e6155fd"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 06:00:03 crc kubenswrapper[4760]: I1227 06:00:03.043900 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 27 06:00:03 crc kubenswrapper[4760]: I1227 06:00:03.043937 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-config-volume\") on node \"crc\" DevicePath \"\"" Dec 27 06:00:03 crc kubenswrapper[4760]: I1227 06:00:03.043947 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl4rr\" (UniqueName: \"kubernetes.io/projected/95c3d786-3ff6-4cbf-a2ae-4c588e6155fd-kube-api-access-bl4rr\") on node \"crc\" DevicePath \"\"" Dec 27 06:00:03 crc kubenswrapper[4760]: I1227 06:00:03.453600 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2" event={"ID":"95c3d786-3ff6-4cbf-a2ae-4c588e6155fd","Type":"ContainerDied","Data":"80fc9eff45d949716113cc43a37b2a81327d7b1ac203a2d0bf17361569bf94e4"} Dec 27 06:00:03 crc kubenswrapper[4760]: I1227 06:00:03.453643 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80fc9eff45d949716113cc43a37b2a81327d7b1ac203a2d0bf17361569bf94e4" Dec 27 06:00:03 crc kubenswrapper[4760]: I1227 06:00:03.453706 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29446920-jk2h2" Dec 27 06:00:03 crc kubenswrapper[4760]: I1227 06:00:03.512511 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="508b7ccd-bda5-495a-991f-2f0dcfc52cd3" path="/var/lib/kubelet/pods/508b7ccd-bda5-495a-991f-2f0dcfc52cd3/volumes" Dec 27 06:00:07 crc kubenswrapper[4760]: I1227 06:00:07.495753 4760 generic.go:334] "Generic (PLEG): container finished" podID="6c6dbd13-bfc2-4fde-9787-c1c4b2793736" containerID="59e49dc223cba50e1785b55a507ebe83c342e7ea8f5989508364c1a100fc2dfc" exitCode=0 Dec 27 06:00:07 crc kubenswrapper[4760]: I1227 06:00:07.495834 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjf4c" event={"ID":"6c6dbd13-bfc2-4fde-9787-c1c4b2793736","Type":"ContainerDied","Data":"59e49dc223cba50e1785b55a507ebe83c342e7ea8f5989508364c1a100fc2dfc"} Dec 27 06:00:07 crc kubenswrapper[4760]: I1227 06:00:07.498512 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dmlkj" event={"ID":"b4aff88c-89b8-47dd-a535-c806e81073ff","Type":"ContainerStarted","Data":"b38effe3b1daaa42f1f6bf363173f8126b024884c82cbb000cedc85a26b41eb1"} Dec 27 06:00:07 crc kubenswrapper[4760]: I1227 06:00:07.498764 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dmlkj" Dec 27 06:00:07 crc kubenswrapper[4760]: I1227 06:00:07.553253 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dmlkj" podStartSLOduration=2.586576038 podStartE2EDuration="10.553235814s" podCreationTimestamp="2025-12-27 05:59:57 +0000 UTC" firstStartedPulling="2025-12-27 05:59:58.761683801 +0000 UTC m=+921.521753126" lastFinishedPulling="2025-12-27 06:00:06.728343557 +0000 UTC m=+929.488412902" observedRunningTime="2025-12-27 06:00:07.548646939 +0000 UTC m=+930.308716264" 
watchObservedRunningTime="2025-12-27 06:00:07.553235814 +0000 UTC m=+930.313305129" Dec 27 06:00:08 crc kubenswrapper[4760]: I1227 06:00:08.509082 4760 generic.go:334] "Generic (PLEG): container finished" podID="6c6dbd13-bfc2-4fde-9787-c1c4b2793736" containerID="f0a781dce0d02263c5d2a059a3841caa226ce41b3268a50dd38787db0288e7d5" exitCode=0 Dec 27 06:00:08 crc kubenswrapper[4760]: I1227 06:00:08.509149 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjf4c" event={"ID":"6c6dbd13-bfc2-4fde-9787-c1c4b2793736","Type":"ContainerDied","Data":"f0a781dce0d02263c5d2a059a3841caa226ce41b3268a50dd38787db0288e7d5"} Dec 27 06:00:09 crc kubenswrapper[4760]: I1227 06:00:09.522962 4760 generic.go:334] "Generic (PLEG): container finished" podID="6c6dbd13-bfc2-4fde-9787-c1c4b2793736" containerID="18918bf1ef15496bf4029e5a1af20800fd0216055f2de11271fe88609e0e3ba3" exitCode=0 Dec 27 06:00:09 crc kubenswrapper[4760]: I1227 06:00:09.523076 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjf4c" event={"ID":"6c6dbd13-bfc2-4fde-9787-c1c4b2793736","Type":"ContainerDied","Data":"18918bf1ef15496bf4029e5a1af20800fd0216055f2de11271fe88609e0e3ba3"} Dec 27 06:00:10 crc kubenswrapper[4760]: I1227 06:00:10.535044 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjf4c" event={"ID":"6c6dbd13-bfc2-4fde-9787-c1c4b2793736","Type":"ContainerStarted","Data":"25ebbac4a9ba52f5637dce17575992e63c3f979ccc4742a1ef29ea1081044ccc"} Dec 27 06:00:10 crc kubenswrapper[4760]: I1227 06:00:10.535408 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjf4c" event={"ID":"6c6dbd13-bfc2-4fde-9787-c1c4b2793736","Type":"ContainerStarted","Data":"d607c3e130468fcf4324a0d30a4343868b894ef9bd67615694d1c3484df5e087"} Dec 27 06:00:10 crc kubenswrapper[4760]: I1227 06:00:10.535419 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjf4c" event={"ID":"6c6dbd13-bfc2-4fde-9787-c1c4b2793736","Type":"ContainerStarted","Data":"1b8253b2cbaa60ccfbf9965e0c4ff4462868685e13e9219044640779d82bcb0d"} Dec 27 06:00:10 crc kubenswrapper[4760]: I1227 06:00:10.535429 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjf4c" event={"ID":"6c6dbd13-bfc2-4fde-9787-c1c4b2793736","Type":"ContainerStarted","Data":"3ff15302ca98c36fa76261dd67cad18e066f39094ec928bd04a1c33045705b29"} Dec 27 06:00:10 crc kubenswrapper[4760]: I1227 06:00:10.535438 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjf4c" event={"ID":"6c6dbd13-bfc2-4fde-9787-c1c4b2793736","Type":"ContainerStarted","Data":"ff8bc4a829c71d572e683c5410aa78b316f2ba05c911f201acfee372df9e00af"} Dec 27 06:00:11 crc kubenswrapper[4760]: I1227 06:00:11.545720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sjf4c" event={"ID":"6c6dbd13-bfc2-4fde-9787-c1c4b2793736","Type":"ContainerStarted","Data":"ac0e7081ea7ff219e20502b68cc5bcb2558450382eec91a77a707d94ae7e8c8f"} Dec 27 06:00:11 crc kubenswrapper[4760]: I1227 06:00:11.546008 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-sjf4c" Dec 27 06:00:11 crc kubenswrapper[4760]: I1227 06:00:11.573673 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-sjf4c" podStartSLOduration=7.146467608 podStartE2EDuration="14.573641394s" podCreationTimestamp="2025-12-27 05:59:57 +0000 UTC" firstStartedPulling="2025-12-27 05:59:59.350209387 +0000 UTC 
m=+922.110278702" lastFinishedPulling="2025-12-27 06:00:06.777383133 +0000 UTC m=+929.537452488" observedRunningTime="2025-12-27 06:00:11.570982758 +0000 UTC m=+934.331052123" watchObservedRunningTime="2025-12-27 06:00:11.573641394 +0000 UTC m=+934.333710749" Dec 27 06:00:13 crc kubenswrapper[4760]: I1227 06:00:13.949006 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-sjf4c" Dec 27 06:00:14 crc kubenswrapper[4760]: I1227 06:00:14.005777 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-sjf4c" Dec 27 06:00:18 crc kubenswrapper[4760]: I1227 06:00:18.347212 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dmlkj" Dec 27 06:00:18 crc kubenswrapper[4760]: I1227 06:00:18.484774 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-sjdjf" Dec 27 06:00:19 crc kubenswrapper[4760]: I1227 06:00:19.949339 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-fj4vj" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.534611 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m"] Dec 27 06:00:21 crc kubenswrapper[4760]: E1227 06:00:21.535209 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="508b7ccd-bda5-495a-991f-2f0dcfc52cd3" containerName="extract-content" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.535226 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="508b7ccd-bda5-495a-991f-2f0dcfc52cd3" containerName="extract-content" Dec 27 06:00:21 crc kubenswrapper[4760]: E1227 06:00:21.535245 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c3d786-3ff6-4cbf-a2ae-4c588e6155fd" containerName="collect-profiles" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.535252 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c3d786-3ff6-4cbf-a2ae-4c588e6155fd" containerName="collect-profiles" Dec 27 06:00:21 crc kubenswrapper[4760]: E1227 06:00:21.535273 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="508b7ccd-bda5-495a-991f-2f0dcfc52cd3" containerName="registry-server" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.535281 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="508b7ccd-bda5-495a-991f-2f0dcfc52cd3" containerName="registry-server" Dec 27 06:00:21 crc kubenswrapper[4760]: E1227 06:00:21.535292 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="508b7ccd-bda5-495a-991f-2f0dcfc52cd3" containerName="extract-utilities" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.535299 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="508b7ccd-bda5-495a-991f-2f0dcfc52cd3" containerName="extract-utilities" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.535426 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="508b7ccd-bda5-495a-991f-2f0dcfc52cd3" containerName="registry-server" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.535439 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c3d786-3ff6-4cbf-a2ae-4c588e6155fd" containerName="collect-profiles" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.536435 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.541330 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m"] Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.543465 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.628146 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m\" (UID: \"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.628436 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnnwb\" (UniqueName: \"kubernetes.io/projected/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-kube-api-access-jnnwb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m\" (UID: \"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.628548 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m\" (UID: \"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.729452 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m\" (UID: \"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.729524 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnnwb\" (UniqueName: \"kubernetes.io/projected/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-kube-api-access-jnnwb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m\" (UID: \"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.729566 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m\" (UID: \"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.729969 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m\" (UID: \"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.730004 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m\" (UID: \"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.764771 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnnwb\" (UniqueName: \"kubernetes.io/projected/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-kube-api-access-jnnwb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m\" (UID: \"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" Dec 27 06:00:21 crc kubenswrapper[4760]: I1227 06:00:21.870843 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" Dec 27 06:00:22 crc kubenswrapper[4760]: I1227 06:00:22.152140 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m"] Dec 27 06:00:22 crc kubenswrapper[4760]: I1227 06:00:22.615723 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" event={"ID":"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf","Type":"ContainerStarted","Data":"a3048ae77ebdba7fa0d676ad2482c61a60d71a95eae8561a213562d3e1992160"} Dec 27 06:00:26 crc kubenswrapper[4760]: I1227 06:00:26.641879 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" event={"ID":"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf","Type":"ContainerStarted","Data":"f6bbf0cf73f470accee65c0dd538188de2599e8f4a9da8ffd34a538b68311e21"} Dec 27 06:00:27 crc kubenswrapper[4760]: I1227 06:00:27.651328 4760 generic.go:334] "Generic (PLEG): container finished" podID="3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf" containerID="f6bbf0cf73f470accee65c0dd538188de2599e8f4a9da8ffd34a538b68311e21" exitCode=0 Dec 27 06:00:27 crc kubenswrapper[4760]: I1227 06:00:27.651380 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" event={"ID":"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf","Type":"ContainerDied","Data":"f6bbf0cf73f470accee65c0dd538188de2599e8f4a9da8ffd34a538b68311e21"} Dec 27 06:00:28 crc kubenswrapper[4760]: I1227 06:00:28.953399 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-sjf4c" Dec 27 06:00:33 crc kubenswrapper[4760]: I1227 06:00:33.692873 4760 generic.go:334] "Generic (PLEG): container finished" podID="3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf" containerID="4b426aa9c32988a99b234dd1d65ff9a0b83c1c40e955b829013e6b56cc22712c" exitCode=0 Dec 27 06:00:33 crc kubenswrapper[4760]: I1227 06:00:33.693154 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" event={"ID":"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf","Type":"ContainerDied","Data":"4b426aa9c32988a99b234dd1d65ff9a0b83c1c40e955b829013e6b56cc22712c"} Dec 27 06:00:34 crc kubenswrapper[4760]: I1227 06:00:34.714665 4760 generic.go:334] "Generic (PLEG): container finished" podID="3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf" containerID="a3135ae34424b8b09d832076ed02ff3d5dc4c08916c58353028496b7396eca46" exitCode=0 Dec 27 06:00:34 crc kubenswrapper[4760]: I1227 06:00:34.714748 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" event={"ID":"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf","Type":"ContainerDied","Data":"a3135ae34424b8b09d832076ed02ff3d5dc4c08916c58353028496b7396eca46"} Dec 27 06:00:36 crc kubenswrapper[4760]: I1227 06:00:36.011601 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" Dec 27 06:00:36 crc kubenswrapper[4760]: I1227 06:00:36.072415 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnnwb\" (UniqueName: \"kubernetes.io/projected/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-kube-api-access-jnnwb\") pod \"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf\" (UID: \"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf\") " Dec 27 06:00:36 crc kubenswrapper[4760]: I1227 06:00:36.072544 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-bundle\") pod \"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf\" (UID: \"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf\") " Dec 27 06:00:36 crc kubenswrapper[4760]: I1227 06:00:36.072646 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-util\") pod \"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf\" (UID: \"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf\") " Dec 27 06:00:36 crc kubenswrapper[4760]: I1227 06:00:36.073641 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-bundle" (OuterVolumeSpecName: "bundle") pod "3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf" (UID: "3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:00:36 crc kubenswrapper[4760]: I1227 06:00:36.078072 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-kube-api-access-jnnwb" (OuterVolumeSpecName: "kube-api-access-jnnwb") pod "3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf" (UID: "3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf"). InnerVolumeSpecName "kube-api-access-jnnwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:00:36 crc kubenswrapper[4760]: I1227 06:00:36.083501 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-util" (OuterVolumeSpecName: "util") pod "3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf" (UID: "3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:00:36 crc kubenswrapper[4760]: I1227 06:00:36.174523 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 06:00:36 crc kubenswrapper[4760]: I1227 06:00:36.174560 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-util\") on node \"crc\" DevicePath \"\"" Dec 27 06:00:36 crc kubenswrapper[4760]: I1227 06:00:36.174570 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnnwb\" (UniqueName: \"kubernetes.io/projected/3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf-kube-api-access-jnnwb\") on node \"crc\" DevicePath \"\"" Dec 27 06:00:36 crc kubenswrapper[4760]: I1227 06:00:36.728943 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" event={"ID":"3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf","Type":"ContainerDied","Data":"a3048ae77ebdba7fa0d676ad2482c61a60d71a95eae8561a213562d3e1992160"} Dec 27 06:00:36 crc kubenswrapper[4760]: I1227 06:00:36.728987 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3048ae77ebdba7fa0d676ad2482c61a60d71a95eae8561a213562d3e1992160" Dec 27 06:00:36 crc kubenswrapper[4760]: I1227 06:00:36.729026 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.220428 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7nnkc"] Dec 27 06:00:44 crc kubenswrapper[4760]: E1227 06:00:44.221648 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf" containerName="util" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.221665 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf" containerName="util" Dec 27 06:00:44 crc kubenswrapper[4760]: E1227 06:00:44.221692 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf" containerName="pull" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.221701 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf" containerName="pull" Dec 27 06:00:44 crc kubenswrapper[4760]: E1227 06:00:44.221715 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf" containerName="extract" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.221723 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf" containerName="extract" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.221844 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf" containerName="extract" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.222463 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7nnkc" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.223959 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.224724 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.225361 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-wdzv6" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.234079 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7nnkc"] Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.313112 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1cd72632-1093-49b1-9dfe-aaf4480f82d3-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7nnkc\" (UID: \"1cd72632-1093-49b1-9dfe-aaf4480f82d3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7nnkc" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.313197 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5pqp\" (UniqueName: \"kubernetes.io/projected/1cd72632-1093-49b1-9dfe-aaf4480f82d3-kube-api-access-v5pqp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7nnkc\" (UID: \"1cd72632-1093-49b1-9dfe-aaf4480f82d3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7nnkc" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.414434 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5pqp\" (UniqueName: \"kubernetes.io/projected/1cd72632-1093-49b1-9dfe-aaf4480f82d3-kube-api-access-v5pqp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7nnkc\" (UID: \"1cd72632-1093-49b1-9dfe-aaf4480f82d3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7nnkc" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.414523 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1cd72632-1093-49b1-9dfe-aaf4480f82d3-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7nnkc\" (UID: \"1cd72632-1093-49b1-9dfe-aaf4480f82d3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7nnkc" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.414956 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1cd72632-1093-49b1-9dfe-aaf4480f82d3-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7nnkc\" (UID: \"1cd72632-1093-49b1-9dfe-aaf4480f82d3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7nnkc" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.438954 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5pqp\" (UniqueName: \"kubernetes.io/projected/1cd72632-1093-49b1-9dfe-aaf4480f82d3-kube-api-access-v5pqp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7nnkc\" (UID: \"1cd72632-1093-49b1-9dfe-aaf4480f82d3\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7nnkc" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.540590 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7nnkc" Dec 27 06:00:44 crc kubenswrapper[4760]: I1227 06:00:44.945839 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7nnkc"] Dec 27 06:00:45 crc kubenswrapper[4760]: I1227 06:00:45.794249 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7nnkc" event={"ID":"1cd72632-1093-49b1-9dfe-aaf4480f82d3","Type":"ContainerStarted","Data":"398fd097e73d8fab28561c7a7a8a4731095df8c7adb2966e8af60ace60c6151b"} Dec 27 06:00:59 crc kubenswrapper[4760]: I1227 06:00:59.894239 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7nnkc" event={"ID":"1cd72632-1093-49b1-9dfe-aaf4480f82d3","Type":"ContainerStarted","Data":"4f1fda8ad97e0ef57428ac9585db4526b24a77f12bb9993fb5f639415dcad297"} Dec 27 06:00:59 crc kubenswrapper[4760]: I1227 06:00:59.930259 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7nnkc" podStartSLOduration=2.093297728 podStartE2EDuration="15.930235572s" podCreationTimestamp="2025-12-27 06:00:44 +0000 UTC" firstStartedPulling="2025-12-27 06:00:44.957901866 +0000 UTC m=+967.717971181" lastFinishedPulling="2025-12-27 06:00:58.79483971 +0000 UTC m=+981.554909025" observedRunningTime="2025-12-27 06:00:59.926078567 +0000 UTC m=+982.686147912" watchObservedRunningTime="2025-12-27 06:00:59.930235572 +0000 UTC m=+982.690304917" Dec 27 06:01:01 crc kubenswrapper[4760]: I1227 06:01:01.774038 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-vjwvm"] Dec 27 06:01:01 crc kubenswrapper[4760]: I1227 06:01:01.777251 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-vjwvm" Dec 27 06:01:01 crc kubenswrapper[4760]: I1227 06:01:01.783330 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-vjwvm"] Dec 27 06:01:01 crc kubenswrapper[4760]: I1227 06:01:01.783953 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 27 06:01:01 crc kubenswrapper[4760]: I1227 06:01:01.783998 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 27 06:01:01 crc kubenswrapper[4760]: I1227 06:01:01.784057 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-dx88w" Dec 27 06:01:01 crc kubenswrapper[4760]: I1227 06:01:01.855473 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd4hw\" (UniqueName: \"kubernetes.io/projected/39315eb5-bebe-48ac-80c7-b4ea6f02c508-kube-api-access-qd4hw\") pod \"cert-manager-webhook-f4fb5df64-vjwvm\" (UID: \"39315eb5-bebe-48ac-80c7-b4ea6f02c508\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-vjwvm" Dec 27 06:01:01 crc kubenswrapper[4760]: I1227 06:01:01.855513 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39315eb5-bebe-48ac-80c7-b4ea6f02c508-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-vjwvm\" (UID: \"39315eb5-bebe-48ac-80c7-b4ea6f02c508\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-vjwvm" Dec 27 06:01:01 crc kubenswrapper[4760]: I1227 06:01:01.957075 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd4hw\" (UniqueName: \"kubernetes.io/projected/39315eb5-bebe-48ac-80c7-b4ea6f02c508-kube-api-access-qd4hw\") pod \"cert-manager-webhook-f4fb5df64-vjwvm\" (UID: \"39315eb5-bebe-48ac-80c7-b4ea6f02c508\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-vjwvm" Dec 27 06:01:01 crc kubenswrapper[4760]: I1227 06:01:01.957165 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39315eb5-bebe-48ac-80c7-b4ea6f02c508-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-vjwvm\" (UID: \"39315eb5-bebe-48ac-80c7-b4ea6f02c508\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-vjwvm" Dec 27 06:01:01 crc kubenswrapper[4760]: I1227 06:01:01.975668 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd4hw\" (UniqueName: \"kubernetes.io/projected/39315eb5-bebe-48ac-80c7-b4ea6f02c508-kube-api-access-qd4hw\") pod \"cert-manager-webhook-f4fb5df64-vjwvm\" (UID: \"39315eb5-bebe-48ac-80c7-b4ea6f02c508\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-vjwvm" Dec 27 06:01:01 crc kubenswrapper[4760]: I1227 06:01:01.992415 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39315eb5-bebe-48ac-80c7-b4ea6f02c508-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-vjwvm\" (UID: \"39315eb5-bebe-48ac-80c7-b4ea6f02c508\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-vjwvm" Dec 27 06:01:02 crc kubenswrapper[4760]: I1227 06:01:02.102916 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-vjwvm" Dec 27 06:01:02 crc kubenswrapper[4760]: I1227 06:01:02.403148 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-vjwvm"] Dec 27 06:01:02 crc kubenswrapper[4760]: I1227 06:01:02.918486 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-vjwvm" event={"ID":"39315eb5-bebe-48ac-80c7-b4ea6f02c508","Type":"ContainerStarted","Data":"9fd7712611f1a8f25d440b035ceb1c5b3249db2e30d404b125121544ca172185"} Dec 27 06:01:05 crc kubenswrapper[4760]: I1227 06:01:05.282411 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-8wvq2"] Dec 27 06:01:05 crc kubenswrapper[4760]: I1227 06:01:05.283591 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-8wvq2" Dec 27 06:01:05 crc kubenswrapper[4760]: I1227 06:01:05.285988 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-xfdkc" Dec 27 06:01:05 crc kubenswrapper[4760]: I1227 06:01:05.287361 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-8wvq2"] Dec 27 06:01:05 crc kubenswrapper[4760]: I1227 06:01:05.290837 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 06:01:05 crc kubenswrapper[4760]: I1227 06:01:05.290883 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 06:01:05 crc kubenswrapper[4760]: I1227 06:01:05.297130 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j6jc\" (UniqueName: \"kubernetes.io/projected/29ba29ce-2b1b-42a8-9107-dc4dfbf96ee7-kube-api-access-8j6jc\") pod \"cert-manager-cainjector-855d9ccff4-8wvq2\" (UID: \"29ba29ce-2b1b-42a8-9107-dc4dfbf96ee7\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-8wvq2" Dec 27 06:01:05 crc kubenswrapper[4760]: I1227 06:01:05.297179 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/29ba29ce-2b1b-42a8-9107-dc4dfbf96ee7-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-8wvq2\" (UID: \"29ba29ce-2b1b-42a8-9107-dc4dfbf96ee7\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-8wvq2" Dec 27 06:01:05 crc kubenswrapper[4760]: I1227 06:01:05.398128 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j6jc\" (UniqueName: \"kubernetes.io/projected/29ba29ce-2b1b-42a8-9107-dc4dfbf96ee7-kube-api-access-8j6jc\") pod \"cert-manager-cainjector-855d9ccff4-8wvq2\" (UID: \"29ba29ce-2b1b-42a8-9107-dc4dfbf96ee7\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-8wvq2" Dec 27 06:01:05 crc kubenswrapper[4760]: I1227 06:01:05.398179 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/29ba29ce-2b1b-42a8-9107-dc4dfbf96ee7-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-8wvq2\" (UID: \"29ba29ce-2b1b-42a8-9107-dc4dfbf96ee7\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-8wvq2" Dec 27 06:01:05 crc kubenswrapper[4760]: I1227 06:01:05.418026 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/29ba29ce-2b1b-42a8-9107-dc4dfbf96ee7-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-8wvq2\" (UID: \"29ba29ce-2b1b-42a8-9107-dc4dfbf96ee7\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-8wvq2" Dec 27 06:01:05 crc kubenswrapper[4760]: I1227 06:01:05.433114 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j6jc\" (UniqueName: \"kubernetes.io/projected/29ba29ce-2b1b-42a8-9107-dc4dfbf96ee7-kube-api-access-8j6jc\") pod \"cert-manager-cainjector-855d9ccff4-8wvq2\" (UID: \"29ba29ce-2b1b-42a8-9107-dc4dfbf96ee7\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-8wvq2" Dec 27 06:01:05 crc kubenswrapper[4760]: I1227 06:01:05.600368 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-8wvq2" Dec 27 06:01:05 crc kubenswrapper[4760]: I1227 06:01:05.812467 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-8wvq2"] Dec 27 06:01:09 crc kubenswrapper[4760]: I1227 06:01:09.967611 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-8wvq2" event={"ID":"29ba29ce-2b1b-42a8-9107-dc4dfbf96ee7","Type":"ContainerStarted","Data":"208fc2ba2c335932543273f955c2c57fdd724ede600079076ceb39800ad5e2df"} Dec 27 06:01:11 crc kubenswrapper[4760]: I1227 06:01:11.992335 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-8wvq2" event={"ID":"29ba29ce-2b1b-42a8-9107-dc4dfbf96ee7","Type":"ContainerStarted","Data":"aa2a3cfaaa3b7fa33f5fddad78fd0834f00ba99554826580d8d692fa238ca6f9"} Dec 27 06:01:11 crc kubenswrapper[4760]: I1227 06:01:11.997530 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-vjwvm" event={"ID":"39315eb5-bebe-48ac-80c7-b4ea6f02c508","Type":"ContainerStarted","Data":"536c69796bae46069973ceca495aa43f208ceb7982b3f49d9d78a82ea4e79955"} Dec 27 06:01:11 crc kubenswrapper[4760]: I1227 06:01:11.997668 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-vjwvm" Dec 27 06:01:12 crc kubenswrapper[4760]: I1227 06:01:12.009653 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-8wvq2" podStartSLOduration=5.275608992 podStartE2EDuration="7.009639782s" podCreationTimestamp="2025-12-27 06:01:05 +0000 UTC" firstStartedPulling="2025-12-27 06:01:09.234573463 +0000 UTC m=+991.994642778" lastFinishedPulling="2025-12-27 06:01:10.968604253 +0000 UTC m=+993.728673568" observedRunningTime="2025-12-27 06:01:12.006616165 +0000 UTC m=+994.766685500" watchObservedRunningTime="2025-12-27 06:01:12.009639782 +0000 UTC m=+994.769709097" Dec 27 06:01:12 crc kubenswrapper[4760]: I1227 06:01:12.023651 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-vjwvm" podStartSLOduration=2.444774494 podStartE2EDuration="11.023633065s" 
podCreationTimestamp="2025-12-27 06:01:01 +0000 UTC" firstStartedPulling="2025-12-27 06:01:02.408364142 +0000 UTC m=+985.168433457" lastFinishedPulling="2025-12-27 06:01:10.987222713 +0000 UTC m=+993.747292028" observedRunningTime="2025-12-27 06:01:12.019368677 +0000 UTC m=+994.779438002" watchObservedRunningTime="2025-12-27 06:01:12.023633065 +0000 UTC m=+994.783702390" Dec 27 06:01:15 crc kubenswrapper[4760]: I1227 06:01:15.219363 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-jps6t"] Dec 27 06:01:15 crc kubenswrapper[4760]: I1227 06:01:15.220847 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-jps6t" Dec 27 06:01:15 crc kubenswrapper[4760]: I1227 06:01:15.223434 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-t2rtr" Dec 27 06:01:15 crc kubenswrapper[4760]: I1227 06:01:15.246070 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdk2q\" (UniqueName: \"kubernetes.io/projected/842d0d88-e62c-4967-8863-013433d2218b-kube-api-access-jdk2q\") pod \"cert-manager-86cb77c54b-jps6t\" (UID: \"842d0d88-e62c-4967-8863-013433d2218b\") " pod="cert-manager/cert-manager-86cb77c54b-jps6t" Dec 27 06:01:15 crc kubenswrapper[4760]: I1227 06:01:15.246153 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/842d0d88-e62c-4967-8863-013433d2218b-bound-sa-token\") pod \"cert-manager-86cb77c54b-jps6t\" (UID: \"842d0d88-e62c-4967-8863-013433d2218b\") " pod="cert-manager/cert-manager-86cb77c54b-jps6t" Dec 27 06:01:15 crc kubenswrapper[4760]: I1227 06:01:15.248708 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-jps6t"] Dec 27 06:01:15 crc kubenswrapper[4760]: I1227 06:01:15.347804 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdk2q\" (UniqueName: \"kubernetes.io/projected/842d0d88-e62c-4967-8863-013433d2218b-kube-api-access-jdk2q\") pod \"cert-manager-86cb77c54b-jps6t\" (UID: \"842d0d88-e62c-4967-8863-013433d2218b\") " pod="cert-manager/cert-manager-86cb77c54b-jps6t" Dec 27 06:01:15 crc kubenswrapper[4760]: I1227 06:01:15.347851 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/842d0d88-e62c-4967-8863-013433d2218b-bound-sa-token\") pod \"cert-manager-86cb77c54b-jps6t\" (UID: \"842d0d88-e62c-4967-8863-013433d2218b\") " pod="cert-manager/cert-manager-86cb77c54b-jps6t" Dec 27 06:01:15 crc kubenswrapper[4760]: I1227 06:01:15.370828 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdk2q\" (UniqueName: \"kubernetes.io/projected/842d0d88-e62c-4967-8863-013433d2218b-kube-api-access-jdk2q\") pod \"cert-manager-86cb77c54b-jps6t\" (UID: \"842d0d88-e62c-4967-8863-013433d2218b\") " pod="cert-manager/cert-manager-86cb77c54b-jps6t" Dec 27 06:01:15 crc kubenswrapper[4760]: I1227 06:01:15.399006 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/842d0d88-e62c-4967-8863-013433d2218b-bound-sa-token\") pod \"cert-manager-86cb77c54b-jps6t\" (UID: \"842d0d88-e62c-4967-8863-013433d2218b\") " pod="cert-manager/cert-manager-86cb77c54b-jps6t" Dec 27 06:01:15 crc kubenswrapper[4760]: I1227 
06:01:15.543381 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-jps6t" Dec 27 06:01:15 crc kubenswrapper[4760]: I1227 06:01:15.978778 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-jps6t"] Dec 27 06:01:16 crc kubenswrapper[4760]: I1227 06:01:16.029988 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-jps6t" event={"ID":"842d0d88-e62c-4967-8863-013433d2218b","Type":"ContainerStarted","Data":"0ab6da2daa2d94d900dbfe62009cef2d79c67562485685e36fcefd0c8a1b84f3"} Dec 27 06:01:17 crc kubenswrapper[4760]: I1227 06:01:17.039472 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-jps6t" event={"ID":"842d0d88-e62c-4967-8863-013433d2218b","Type":"ContainerStarted","Data":"be6539aa6deead147190bb88b43cfd95fb388e85205f282a375b83949f33ca12"} Dec 27 06:01:17 crc kubenswrapper[4760]: I1227 06:01:17.061239 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-jps6t" podStartSLOduration=2.061216799 podStartE2EDuration="2.061216799s" podCreationTimestamp="2025-12-27 06:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 06:01:17.05651211 +0000 UTC m=+999.816581455" watchObservedRunningTime="2025-12-27 06:01:17.061216799 +0000 UTC m=+999.821286124" Dec 27 06:01:17 crc kubenswrapper[4760]: I1227 06:01:17.108958 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-vjwvm" Dec 27 06:01:20 crc kubenswrapper[4760]: I1227 06:01:20.455334 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rtrj2"] Dec 27 06:01:20 crc kubenswrapper[4760]: I1227 06:01:20.456225 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rtrj2" Dec 27 06:01:20 crc kubenswrapper[4760]: I1227 06:01:20.458030 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-k5lb4" Dec 27 06:01:20 crc kubenswrapper[4760]: I1227 06:01:20.458648 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 27 06:01:20 crc kubenswrapper[4760]: I1227 06:01:20.458793 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 27 06:01:20 crc kubenswrapper[4760]: I1227 06:01:20.466653 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rtrj2"] Dec 27 06:01:20 crc kubenswrapper[4760]: I1227 06:01:20.528146 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n565k\" (UniqueName: \"kubernetes.io/projected/33f7ee26-0e95-499e-a7bd-ec65dba4ba7c-kube-api-access-n565k\") pod \"openstack-operator-index-rtrj2\" (UID: \"33f7ee26-0e95-499e-a7bd-ec65dba4ba7c\") " pod="openstack-operators/openstack-operator-index-rtrj2" Dec 27 06:01:20 crc kubenswrapper[4760]: I1227 06:01:20.629906 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n565k\" (UniqueName: \"kubernetes.io/projected/33f7ee26-0e95-499e-a7bd-ec65dba4ba7c-kube-api-access-n565k\") pod \"openstack-operator-index-rtrj2\" (UID: \"33f7ee26-0e95-499e-a7bd-ec65dba4ba7c\") " pod="openstack-operators/openstack-operator-index-rtrj2" Dec 27 06:01:20 crc kubenswrapper[4760]: I1227 06:01:20.648727 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n565k\" (UniqueName: \"kubernetes.io/projected/33f7ee26-0e95-499e-a7bd-ec65dba4ba7c-kube-api-access-n565k\") pod \"openstack-operator-index-rtrj2\" (UID: \"33f7ee26-0e95-499e-a7bd-ec65dba4ba7c\") " pod="openstack-operators/openstack-operator-index-rtrj2" Dec 27 06:01:20 crc kubenswrapper[4760]: I1227 06:01:20.776185 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rtrj2" Dec 27 06:01:20 crc kubenswrapper[4760]: I1227 06:01:20.982461 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rtrj2"] Dec 27 06:01:20 crc kubenswrapper[4760]: W1227 06:01:20.983914 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33f7ee26_0e95_499e_a7bd_ec65dba4ba7c.slice/crio-aded90f2ca1dd69e0254ef56252771fa817340cbd2d62a5be4cbfcc58831bb59 WatchSource:0}: Error finding container aded90f2ca1dd69e0254ef56252771fa817340cbd2d62a5be4cbfcc58831bb59: Status 404 returned error can't find the container with id aded90f2ca1dd69e0254ef56252771fa817340cbd2d62a5be4cbfcc58831bb59 Dec 27 06:01:21 crc kubenswrapper[4760]: I1227 06:01:21.063202 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rtrj2" event={"ID":"33f7ee26-0e95-499e-a7bd-ec65dba4ba7c","Type":"ContainerStarted","Data":"aded90f2ca1dd69e0254ef56252771fa817340cbd2d62a5be4cbfcc58831bb59"} Dec 27 06:01:24 crc kubenswrapper[4760]: I1227 06:01:24.042308 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rtrj2"] Dec 27 06:01:24 crc kubenswrapper[4760]: I1227 06:01:24.645770 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8ml8h"] Dec 27 06:01:24 crc kubenswrapper[4760]: I1227 06:01:24.648399 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8ml8h" Dec 27 06:01:24 crc kubenswrapper[4760]: I1227 06:01:24.650641 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8ml8h"] Dec 27 06:01:24 crc kubenswrapper[4760]: I1227 06:01:24.693046 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6f5j\" (UniqueName: \"kubernetes.io/projected/a55331c6-2f6f-4a43-b6e0-69e5be40f28c-kube-api-access-j6f5j\") pod \"openstack-operator-index-8ml8h\" (UID: \"a55331c6-2f6f-4a43-b6e0-69e5be40f28c\") " pod="openstack-operators/openstack-operator-index-8ml8h" Dec 27 06:01:24 crc kubenswrapper[4760]: I1227 06:01:24.794996 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6f5j\" (UniqueName: \"kubernetes.io/projected/a55331c6-2f6f-4a43-b6e0-69e5be40f28c-kube-api-access-j6f5j\") pod \"openstack-operator-index-8ml8h\" (UID: \"a55331c6-2f6f-4a43-b6e0-69e5be40f28c\") " pod="openstack-operators/openstack-operator-index-8ml8h" Dec 27 06:01:24 crc kubenswrapper[4760]: I1227 06:01:24.816836 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6f5j\" (UniqueName: \"kubernetes.io/projected/a55331c6-2f6f-4a43-b6e0-69e5be40f28c-kube-api-access-j6f5j\") pod \"openstack-operator-index-8ml8h\" (UID: \"a55331c6-2f6f-4a43-b6e0-69e5be40f28c\") " pod="openstack-operators/openstack-operator-index-8ml8h" Dec 27 06:01:24 crc kubenswrapper[4760]: I1227 06:01:24.964996 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8ml8h" Dec 27 06:01:25 crc kubenswrapper[4760]: I1227 06:01:25.453326 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8ml8h"] Dec 27 06:01:25 crc kubenswrapper[4760]: W1227 06:01:25.466759 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda55331c6_2f6f_4a43_b6e0_69e5be40f28c.slice/crio-62c01096e2b4b0927deab5153340d5136524542325870a38359df68548467df8 WatchSource:0}: Error finding container 62c01096e2b4b0927deab5153340d5136524542325870a38359df68548467df8: Status 404 returned error can't find the container with id 62c01096e2b4b0927deab5153340d5136524542325870a38359df68548467df8 Dec 27 06:01:26 crc kubenswrapper[4760]: I1227 06:01:26.100225 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8ml8h" event={"ID":"a55331c6-2f6f-4a43-b6e0-69e5be40f28c","Type":"ContainerStarted","Data":"62c01096e2b4b0927deab5153340d5136524542325870a38359df68548467df8"} Dec 27 06:01:35 crc kubenswrapper[4760]: I1227 06:01:35.288355 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 06:01:35 crc kubenswrapper[4760]: I1227 06:01:35.289155 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 06:01:40 crc kubenswrapper[4760]: I1227 06:01:40.217081 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rtrj2" event={"ID":"33f7ee26-0e95-499e-a7bd-ec65dba4ba7c","Type":"ContainerStarted","Data":"dcd09ea3d8f1b71ec06d9e3c6499220adc93750011828ff56e112b78d7308cf1"} Dec 27 06:01:40 crc kubenswrapper[4760]: I1227 06:01:40.217236 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-rtrj2" podUID="33f7ee26-0e95-499e-a7bd-ec65dba4ba7c" containerName="registry-server" containerID="cri-o://dcd09ea3d8f1b71ec06d9e3c6499220adc93750011828ff56e112b78d7308cf1" gracePeriod=2 Dec 27 06:01:40 crc kubenswrapper[4760]: I1227 06:01:40.219368 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8ml8h" event={"ID":"a55331c6-2f6f-4a43-b6e0-69e5be40f28c","Type":"ContainerStarted","Data":"b3b73dcc28b4e0b0a9cb6ae1bf12d60cd12735063649164d064dd6ef6c68eed0"} Dec 27 06:01:40 crc kubenswrapper[4760]: I1227 06:01:40.233244 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rtrj2" podStartSLOduration=2.2516935240000002 podStartE2EDuration="20.233214669s" podCreationTimestamp="2025-12-27 06:01:20 +0000 UTC" firstStartedPulling="2025-12-27 06:01:20.986695278 +0000 UTC m=+1003.746764593" lastFinishedPulling="2025-12-27 06:01:38.968216413 +0000 UTC m=+1021.728285738" observedRunningTime="2025-12-27 06:01:40.231754312 +0000 UTC m=+1022.991823627" watchObservedRunningTime="2025-12-27 06:01:40.233214669 +0000 UTC m=+1022.993283984" Dec 27 
06:01:40 crc kubenswrapper[4760]: I1227 06:01:40.776937 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-rtrj2" Dec 27 06:01:41 crc kubenswrapper[4760]: I1227 06:01:41.235921 4760 generic.go:334] "Generic (PLEG): container finished" podID="33f7ee26-0e95-499e-a7bd-ec65dba4ba7c" containerID="dcd09ea3d8f1b71ec06d9e3c6499220adc93750011828ff56e112b78d7308cf1" exitCode=0 Dec 27 06:01:41 crc kubenswrapper[4760]: I1227 06:01:41.236628 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rtrj2" event={"ID":"33f7ee26-0e95-499e-a7bd-ec65dba4ba7c","Type":"ContainerDied","Data":"dcd09ea3d8f1b71ec06d9e3c6499220adc93750011828ff56e112b78d7308cf1"} Dec 27 06:01:41 crc kubenswrapper[4760]: I1227 06:01:41.304396 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rtrj2" Dec 27 06:01:41 crc kubenswrapper[4760]: I1227 06:01:41.334264 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8ml8h" podStartSLOduration=3.850381057 podStartE2EDuration="17.334220042s" podCreationTimestamp="2025-12-27 06:01:24 +0000 UTC" firstStartedPulling="2025-12-27 06:01:25.469711448 +0000 UTC m=+1008.229780763" lastFinishedPulling="2025-12-27 06:01:38.953550413 +0000 UTC m=+1021.713619748" observedRunningTime="2025-12-27 06:01:40.256378013 +0000 UTC m=+1023.016447368" watchObservedRunningTime="2025-12-27 06:01:41.334220042 +0000 UTC m=+1024.094289367" Dec 27 06:01:41 crc kubenswrapper[4760]: I1227 06:01:41.426826 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n565k\" (UniqueName: \"kubernetes.io/projected/33f7ee26-0e95-499e-a7bd-ec65dba4ba7c-kube-api-access-n565k\") pod \"33f7ee26-0e95-499e-a7bd-ec65dba4ba7c\" (UID: \"33f7ee26-0e95-499e-a7bd-ec65dba4ba7c\") " Dec 27 06:01:41 crc kubenswrapper[4760]: I1227 06:01:41.435746 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f7ee26-0e95-499e-a7bd-ec65dba4ba7c-kube-api-access-n565k" (OuterVolumeSpecName: "kube-api-access-n565k") pod "33f7ee26-0e95-499e-a7bd-ec65dba4ba7c" (UID: "33f7ee26-0e95-499e-a7bd-ec65dba4ba7c"). InnerVolumeSpecName "kube-api-access-n565k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:01:41 crc kubenswrapper[4760]: I1227 06:01:41.528205 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n565k\" (UniqueName: \"kubernetes.io/projected/33f7ee26-0e95-499e-a7bd-ec65dba4ba7c-kube-api-access-n565k\") on node \"crc\" DevicePath \"\"" Dec 27 06:01:42 crc kubenswrapper[4760]: I1227 06:01:42.245055 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rtrj2" event={"ID":"33f7ee26-0e95-499e-a7bd-ec65dba4ba7c","Type":"ContainerDied","Data":"aded90f2ca1dd69e0254ef56252771fa817340cbd2d62a5be4cbfcc58831bb59"} Dec 27 06:01:42 crc kubenswrapper[4760]: I1227 06:01:42.245170 4760 scope.go:117] "RemoveContainer" containerID="dcd09ea3d8f1b71ec06d9e3c6499220adc93750011828ff56e112b78d7308cf1" Dec 27 06:01:42 crc kubenswrapper[4760]: I1227 06:01:42.245176 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rtrj2" Dec 27 06:01:42 crc kubenswrapper[4760]: I1227 06:01:42.271515 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rtrj2"] Dec 27 06:01:42 crc kubenswrapper[4760]: I1227 06:01:42.276535 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-rtrj2"] Dec 27 06:01:43 crc kubenswrapper[4760]: I1227 06:01:43.510396 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f7ee26-0e95-499e-a7bd-ec65dba4ba7c" path="/var/lib/kubelet/pods/33f7ee26-0e95-499e-a7bd-ec65dba4ba7c/volumes" Dec 27 06:01:44 crc kubenswrapper[4760]: I1227 06:01:44.965687 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8ml8h" Dec 27 06:01:44 crc kubenswrapper[4760]: I1227 06:01:44.966070 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-8ml8h" Dec 27 06:01:45 crc kubenswrapper[4760]: I1227 06:01:45.006300 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-8ml8h" Dec 27 06:01:45 crc kubenswrapper[4760]: I1227 06:01:45.306316 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-8ml8h" Dec 27 06:01:52 crc kubenswrapper[4760]: I1227 06:01:52.920605 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f"] Dec 27 06:01:52 crc kubenswrapper[4760]: E1227 06:01:52.921237 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f7ee26-0e95-499e-a7bd-ec65dba4ba7c" containerName="registry-server" Dec 27 06:01:52 crc kubenswrapper[4760]: I1227 06:01:52.921250 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f7ee26-0e95-499e-a7bd-ec65dba4ba7c" containerName="registry-server" Dec 27 06:01:52 crc kubenswrapper[4760]: I1227 06:01:52.921360 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f7ee26-0e95-499e-a7bd-ec65dba4ba7c" containerName="registry-server" Dec 27 06:01:52 crc kubenswrapper[4760]: I1227 06:01:52.922195 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" Dec 27 06:01:52 crc kubenswrapper[4760]: I1227 06:01:52.925157 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xnnf4" Dec 27 06:01:52 crc kubenswrapper[4760]: I1227 06:01:52.927572 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f"] Dec 27 06:01:53 crc kubenswrapper[4760]: I1227 06:01:53.106141 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9w8l\" (UniqueName: \"kubernetes.io/projected/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-kube-api-access-l9w8l\") pod \"c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f\" (UID: \"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc\") " pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" Dec 27 06:01:53 crc kubenswrapper[4760]: I1227 06:01:53.106193 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-util\") pod \"c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f\" (UID: \"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc\") " pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" Dec 27 06:01:53 crc kubenswrapper[4760]: I1227 06:01:53.106244 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-bundle\") pod \"c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f\" (UID: \"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc\") " pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" Dec 27 06:01:53 crc kubenswrapper[4760]: I1227 06:01:53.208406 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9w8l\" (UniqueName: \"kubernetes.io/projected/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-kube-api-access-l9w8l\") pod \"c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f\" (UID: \"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc\") " pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" Dec 27 06:01:53 crc kubenswrapper[4760]: I1227 06:01:53.208504 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-util\") pod \"c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f\" (UID: \"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc\") " pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" Dec 27 06:01:53 crc kubenswrapper[4760]: I1227 06:01:53.208560 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-bundle\") pod \"c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f\" (UID: \"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc\") " pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" Dec 27 06:01:53 crc kubenswrapper[4760]: I1227 06:01:53.209373 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-bundle\") pod \"c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f\" (UID: \"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc\") " pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" Dec 27 06:01:53 crc kubenswrapper[4760]: I1227 06:01:53.209450 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-util\") pod \"c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f\" (UID: \"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc\") " pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" Dec 27 06:01:53 crc kubenswrapper[4760]: I1227 06:01:53.232679 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9w8l\" (UniqueName: \"kubernetes.io/projected/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-kube-api-access-l9w8l\") pod \"c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f\" (UID: \"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc\") " pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" Dec 27 06:01:53 crc kubenswrapper[4760]: I1227 06:01:53.244889 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" Dec 27 06:01:53 crc kubenswrapper[4760]: I1227 06:01:53.545225 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f"] Dec 27 06:01:54 crc kubenswrapper[4760]: I1227 06:01:54.337848 4760 generic.go:334] "Generic (PLEG): container finished" podID="d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc" containerID="522b2282ccd84618c53f41a20ff2857ab259a44d619df5d89e1f965cc19e3993" exitCode=0 Dec 27 06:01:54 crc kubenswrapper[4760]: I1227 06:01:54.337916 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" event={"ID":"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc","Type":"ContainerDied","Data":"522b2282ccd84618c53f41a20ff2857ab259a44d619df5d89e1f965cc19e3993"} Dec 27 06:01:54 crc kubenswrapper[4760]: I1227 06:01:54.337959 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" event={"ID":"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc","Type":"ContainerStarted","Data":"574d9e676ac03b52c8a59e6d981e11411cb6eb7886ae0d06874d9eac413fe8d5"} Dec 27 06:02:01 crc kubenswrapper[4760]: I1227 06:02:01.396823 4760 generic.go:334] "Generic (PLEG): container finished" podID="d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc" containerID="778c6a92079d17ecf1c6ef03d7e3f9b27174825652e63d984a072fb0feadcd34" exitCode=0 Dec 27 06:02:01 crc kubenswrapper[4760]: I1227 06:02:01.396922 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" event={"ID":"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc","Type":"ContainerDied","Data":"778c6a92079d17ecf1c6ef03d7e3f9b27174825652e63d984a072fb0feadcd34"} Dec 27 06:02:02 crc kubenswrapper[4760]: I1227 06:02:02.408790 4760 generic.go:334] "Generic (PLEG): container finished" podID="d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc" containerID="bf3e4a449082e4c86d85a33b4d5683c0d2dc86647b0f76b816245a46fd955425" exitCode=0 Dec 27 06:02:02 crc kubenswrapper[4760]: I1227 06:02:02.408885 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" event={"ID":"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc","Type":"ContainerDied","Data":"bf3e4a449082e4c86d85a33b4d5683c0d2dc86647b0f76b816245a46fd955425"} Dec 27 06:02:03 crc kubenswrapper[4760]: I1227 06:02:03.723128 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" Dec 27 06:02:03 crc kubenswrapper[4760]: I1227 06:02:03.870691 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9w8l\" (UniqueName: \"kubernetes.io/projected/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-kube-api-access-l9w8l\") pod \"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc\" (UID: \"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc\") " Dec 27 06:02:03 crc kubenswrapper[4760]: I1227 06:02:03.870826 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-bundle\") pod \"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc\" (UID: \"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc\") " Dec 27 06:02:03 crc kubenswrapper[4760]: I1227 06:02:03.870880 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-util\") pod \"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc\" (UID: \"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc\") " Dec 27 06:02:03 crc kubenswrapper[4760]: I1227 06:02:03.872035 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-bundle" (OuterVolumeSpecName: "bundle") pod "d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc" (UID: "d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:02:03 crc kubenswrapper[4760]: I1227 06:02:03.872680 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 06:02:03 crc kubenswrapper[4760]: I1227 06:02:03.880467 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-kube-api-access-l9w8l" (OuterVolumeSpecName: "kube-api-access-l9w8l") pod "d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc" (UID: "d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc"). InnerVolumeSpecName "kube-api-access-l9w8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:02:03 crc kubenswrapper[4760]: I1227 06:02:03.886780 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-util" (OuterVolumeSpecName: "util") pod "d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc" (UID: "d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:02:03 crc kubenswrapper[4760]: I1227 06:02:03.973765 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9w8l\" (UniqueName: \"kubernetes.io/projected/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-kube-api-access-l9w8l\") on node \"crc\" DevicePath \"\"" Dec 27 06:02:03 crc kubenswrapper[4760]: I1227 06:02:03.973812 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc-util\") on node \"crc\" DevicePath \"\"" Dec 27 06:02:04 crc kubenswrapper[4760]: I1227 06:02:04.427406 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" event={"ID":"d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc","Type":"ContainerDied","Data":"574d9e676ac03b52c8a59e6d981e11411cb6eb7886ae0d06874d9eac413fe8d5"} Dec 27 06:02:04 crc kubenswrapper[4760]: I1227 06:02:04.427480 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="574d9e676ac03b52c8a59e6d981e11411cb6eb7886ae0d06874d9eac413fe8d5" Dec 27 06:02:04 crc kubenswrapper[4760]: I1227 06:02:04.427516 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f" Dec 27 06:02:05 crc kubenswrapper[4760]: I1227 06:02:05.287488 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 06:02:05 crc kubenswrapper[4760]: I1227 06:02:05.287574 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 06:02:05 crc kubenswrapper[4760]: I1227 06:02:05.287646 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 06:02:05 crc kubenswrapper[4760]: I1227 06:02:05.288554 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"250ddace2814c860fabbd9e2871a90feb8c2c9cfa534ed4a183058d24b4ec76d"} pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 27 06:02:05 crc kubenswrapper[4760]: I1227 06:02:05.288681 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" containerID="cri-o://250ddace2814c860fabbd9e2871a90feb8c2c9cfa534ed4a183058d24b4ec76d" gracePeriod=600 Dec 27 06:02:05 crc kubenswrapper[4760]: I1227 06:02:05.437190 4760 generic.go:334] "Generic (PLEG): container finished" podID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerID="250ddace2814c860fabbd9e2871a90feb8c2c9cfa534ed4a183058d24b4ec76d" exitCode=0 Dec 27 06:02:05 crc kubenswrapper[4760]: I1227 06:02:05.437247 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerDied","Data":"250ddace2814c860fabbd9e2871a90feb8c2c9cfa534ed4a183058d24b4ec76d"} Dec 27 06:02:05 crc kubenswrapper[4760]: I1227 06:02:05.437289 4760 scope.go:117] "RemoveContainer" containerID="8bdfa6927c42e93e91cee65f61297025ba0cd7530f28a85f4fd103174d5484b4" Dec 27 06:02:06 crc kubenswrapper[4760]: I1227 06:02:06.449772 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerStarted","Data":"3c94525c7f3f2fe0472a9dc16fbdb91fa7f9227d81da63569e521eb20968b308"} Dec 27 06:02:10 crc kubenswrapper[4760]: I1227 06:02:10.570769 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz"] Dec 27 06:02:10 crc kubenswrapper[4760]: E1227 06:02:10.571754 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc" containerName="util" Dec 27 06:02:10 crc kubenswrapper[4760]: I1227 06:02:10.571772 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc" containerName="util" Dec 27 06:02:10 crc kubenswrapper[4760]: E1227 06:02:10.571787 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc" containerName="extract" Dec 27 06:02:10 crc kubenswrapper[4760]: I1227 06:02:10.571799 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc" containerName="extract" Dec 27 06:02:10 crc kubenswrapper[4760]: E1227 06:02:10.571813 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc" containerName="pull" Dec 27 06:02:10 crc kubenswrapper[4760]: I1227 06:02:10.571824 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc" containerName="pull" Dec 27 06:02:10 crc kubenswrapper[4760]: I1227 06:02:10.571998 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc" containerName="extract" Dec 27 06:02:10 crc kubenswrapper[4760]: I1227 06:02:10.572691 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz" Dec 27 06:02:10 crc kubenswrapper[4760]: I1227 06:02:10.576350 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-bfq74" Dec 27 06:02:10 crc kubenswrapper[4760]: I1227 06:02:10.604456 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz"] Dec 27 06:02:10 crc kubenswrapper[4760]: I1227 06:02:10.670381 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l68pf\" (UniqueName: \"kubernetes.io/projected/1e1dd6a2-f891-4b2f-9128-8935e8445bc0-kube-api-access-l68pf\") pod \"openstack-operator-controller-operator-7956f678b6-cxfvz\" (UID: \"1e1dd6a2-f891-4b2f-9128-8935e8445bc0\") " pod="openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz" Dec 27 06:02:10 crc kubenswrapper[4760]: I1227 06:02:10.771032 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l68pf\" (UniqueName: \"kubernetes.io/projected/1e1dd6a2-f891-4b2f-9128-8935e8445bc0-kube-api-access-l68pf\") pod \"openstack-operator-controller-operator-7956f678b6-cxfvz\" (UID: \"1e1dd6a2-f891-4b2f-9128-8935e8445bc0\") " pod="openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz" Dec 27 06:02:10 crc kubenswrapper[4760]: I1227 06:02:10.800221 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l68pf\" (UniqueName: \"kubernetes.io/projected/1e1dd6a2-f891-4b2f-9128-8935e8445bc0-kube-api-access-l68pf\") pod \"openstack-operator-controller-operator-7956f678b6-cxfvz\" (UID: \"1e1dd6a2-f891-4b2f-9128-8935e8445bc0\") " pod="openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz" Dec 27 06:02:10 crc kubenswrapper[4760]: I1227 06:02:10.890724 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz" Dec 27 06:02:11 crc kubenswrapper[4760]: I1227 06:02:11.150718 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz"] Dec 27 06:02:11 crc kubenswrapper[4760]: W1227 06:02:11.161107 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e1dd6a2_f891_4b2f_9128_8935e8445bc0.slice/crio-ecdd2345e46857ef9cc47094ebfde4825e2f6a1c58453a2db8171c95f3767e0a WatchSource:0}: Error finding container ecdd2345e46857ef9cc47094ebfde4825e2f6a1c58453a2db8171c95f3767e0a: Status 404 returned error can't find the container with id ecdd2345e46857ef9cc47094ebfde4825e2f6a1c58453a2db8171c95f3767e0a Dec 27 06:02:11 crc kubenswrapper[4760]: I1227 06:02:11.484366 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz" event={"ID":"1e1dd6a2-f891-4b2f-9128-8935e8445bc0","Type":"ContainerStarted","Data":"ecdd2345e46857ef9cc47094ebfde4825e2f6a1c58453a2db8171c95f3767e0a"} Dec 27 06:02:19 crc kubenswrapper[4760]: I1227 06:02:19.536141 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz" event={"ID":"1e1dd6a2-f891-4b2f-9128-8935e8445bc0","Type":"ContainerStarted","Data":"7020c57cf6735267c558eefc400dd17ba2987e2d29d66b12f70326b205a50749"} Dec 27 06:02:19 crc kubenswrapper[4760]: I1227 06:02:19.536735 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz" Dec 27 06:02:19 crc kubenswrapper[4760]: I1227 06:02:19.579175 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz" podStartSLOduration=2.10910816 podStartE2EDuration="9.579159175s" podCreationTimestamp="2025-12-27 06:02:10 +0000 UTC" firstStartedPulling="2025-12-27 06:02:11.165809366 +0000 UTC m=+1053.925878681" lastFinishedPulling="2025-12-27 06:02:18.635860381 +0000 UTC m=+1061.395929696" observedRunningTime="2025-12-27 06:02:19.577404472 +0000 UTC m=+1062.337473787" watchObservedRunningTime="2025-12-27 06:02:19.579159175 +0000 UTC m=+1062.339228490" Dec 27 06:02:30 crc kubenswrapper[4760]: I1227 06:02:30.893847 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.289279 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-568d76f566-slgbn"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.290362 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-568d76f566-slgbn" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.296218 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-qbhrp" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.302204 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-568d76f566-slgbn"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.307036 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5f98b4754f-km8pb"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.313524 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5f98b4754f-km8pb" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.325300 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-cpr5c" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.342629 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5f98b4754f-km8pb"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.345920 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-ggw2m"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.346923 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-ggw2m" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.350080 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-ghpsb" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.357525 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-688f464774-5hjlr"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.358749 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-688f464774-5hjlr" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.363137 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-fdjxf" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.368816 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-ggw2m"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.374169 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-4fmwm"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.374882 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-4fmwm" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.379446 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-z58n4" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.403867 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-4fmwm"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.428530 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-q4p4p"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.429225 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-q4p4p" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.444642 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-vjwvj" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.444749 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.445786 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.448174 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-6qxgm" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.454554 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzwnd\" (UniqueName: \"kubernetes.io/projected/ea3b319d-0327-447a-a3fb-b872f98c5e99-kube-api-access-qzwnd\") pod \"cinder-operator-controller-manager-5f98b4754f-km8pb\" (UID: \"ea3b319d-0327-447a-a3fb-b872f98c5e99\") " pod="openstack-operators/cinder-operator-controller-manager-5f98b4754f-km8pb" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.454594 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knlnx\" (UniqueName: \"kubernetes.io/projected/73020260-4c3a-4bd2-8749-58d052d076e3-kube-api-access-knlnx\") pod \"barbican-operator-controller-manager-568d76f566-slgbn\" (UID: \"73020260-4c3a-4bd2-8749-58d052d076e3\") " pod="openstack-operators/barbican-operator-controller-manager-568d76f566-slgbn" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.454620 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-548pz\" (UniqueName: \"kubernetes.io/projected/0dc73c70-a9ca-4cc4-a69a-21eaca2dd2c2-kube-api-access-548pz\") pod \"designate-operator-controller-manager-66f8b87655-ggw2m\" (UID: \"0dc73c70-a9ca-4cc4-a69a-21eaca2dd2c2\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-ggw2m" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.454645 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j469r\" (UniqueName: \"kubernetes.io/projected/177b0c02-1f5b-4315-b854-5465123ebcab-kube-api-access-j469r\") pod \"glance-operator-controller-manager-688f464774-5hjlr\" (UID: 
\"177b0c02-1f5b-4315-b854-5465123ebcab\") " pod="openstack-operators/glance-operator-controller-manager-688f464774-5hjlr" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.454865 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.478209 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-688f464774-5hjlr"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.507208 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-q4p4p"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.552172 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-kss4t"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.552939 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-kss4t" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.557484 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-h7mtm" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.558497 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knlnx\" (UniqueName: \"kubernetes.io/projected/73020260-4c3a-4bd2-8749-58d052d076e3-kube-api-access-knlnx\") pod \"barbican-operator-controller-manager-568d76f566-slgbn\" (UID: \"73020260-4c3a-4bd2-8749-58d052d076e3\") " pod="openstack-operators/barbican-operator-controller-manager-568d76f566-slgbn" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.558547 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-548pz\" (UniqueName: \"kubernetes.io/projected/0dc73c70-a9ca-4cc4-a69a-21eaca2dd2c2-kube-api-access-548pz\") pod \"designate-operator-controller-manager-66f8b87655-ggw2m\" (UID: \"0dc73c70-a9ca-4cc4-a69a-21eaca2dd2c2\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-ggw2m" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.558577 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slvzg\" (UniqueName: \"kubernetes.io/projected/11fdabd0-f272-435e-86cf-a3fe7343eb0f-kube-api-access-slvzg\") pod \"heat-operator-controller-manager-658dd65b86-4fmwm\" (UID: \"11fdabd0-f272-435e-86cf-a3fe7343eb0f\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-4fmwm" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.558596 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j469r\" (UniqueName: \"kubernetes.io/projected/177b0c02-1f5b-4315-b854-5465123ebcab-kube-api-access-j469r\") pod \"glance-operator-controller-manager-688f464774-5hjlr\" (UID: \"177b0c02-1f5b-4315-b854-5465123ebcab\") " pod="openstack-operators/glance-operator-controller-manager-688f464774-5hjlr" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.558631 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhdmz\" (UniqueName: \"kubernetes.io/projected/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-kube-api-access-jhdmz\") pod \"infra-operator-controller-manager-6d99759cf-vb95d\" (UID: 
\"f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.558669 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhq84\" (UniqueName: \"kubernetes.io/projected/d0770d35-0c7d-4fe6-91f3-e42a6ada0c8d-kube-api-access-jhq84\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-q4p4p\" (UID: \"d0770d35-0c7d-4fe6-91f3-e42a6ada0c8d\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-q4p4p" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.558695 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert\") pod \"infra-operator-controller-manager-6d99759cf-vb95d\" (UID: \"f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.558715 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzwnd\" (UniqueName: \"kubernetes.io/projected/ea3b319d-0327-447a-a3fb-b872f98c5e99-kube-api-access-qzwnd\") pod \"cinder-operator-controller-manager-5f98b4754f-km8pb\" (UID: \"ea3b319d-0327-447a-a3fb-b872f98c5e99\") " pod="openstack-operators/cinder-operator-controller-manager-5f98b4754f-km8pb" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.627525 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzwnd\" (UniqueName: \"kubernetes.io/projected/ea3b319d-0327-447a-a3fb-b872f98c5e99-kube-api-access-qzwnd\") pod \"cinder-operator-controller-manager-5f98b4754f-km8pb\" (UID: \"ea3b319d-0327-447a-a3fb-b872f98c5e99\") " pod="openstack-operators/cinder-operator-controller-manager-5f98b4754f-km8pb" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.630166 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5f98b4754f-km8pb" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.636053 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-548pz\" (UniqueName: \"kubernetes.io/projected/0dc73c70-a9ca-4cc4-a69a-21eaca2dd2c2-kube-api-access-548pz\") pod \"designate-operator-controller-manager-66f8b87655-ggw2m\" (UID: \"0dc73c70-a9ca-4cc4-a69a-21eaca2dd2c2\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-ggw2m" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.637927 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knlnx\" (UniqueName: \"kubernetes.io/projected/73020260-4c3a-4bd2-8749-58d052d076e3-kube-api-access-knlnx\") pod \"barbican-operator-controller-manager-568d76f566-slgbn\" (UID: \"73020260-4c3a-4bd2-8749-58d052d076e3\") " pod="openstack-operators/barbican-operator-controller-manager-568d76f566-slgbn" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.663162 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slvzg\" (UniqueName: \"kubernetes.io/projected/11fdabd0-f272-435e-86cf-a3fe7343eb0f-kube-api-access-slvzg\") pod \"heat-operator-controller-manager-658dd65b86-4fmwm\" (UID: \"11fdabd0-f272-435e-86cf-a3fe7343eb0f\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-4fmwm" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.663569 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhdmz\" (UniqueName: \"kubernetes.io/projected/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-kube-api-access-jhdmz\") pod \"infra-operator-controller-manager-6d99759cf-vb95d\" (UID: \"f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.663599 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhj5p\" (UniqueName: \"kubernetes.io/projected/fe17c9a5-ed8f-4fd6-8b6b-e2e2107092a2-kube-api-access-hhj5p\") pod \"ironic-operator-controller-manager-f99f54bc8-kss4t\" (UID: \"fe17c9a5-ed8f-4fd6-8b6b-e2e2107092a2\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-kss4t" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.663639 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhq84\" (UniqueName: \"kubernetes.io/projected/d0770d35-0c7d-4fe6-91f3-e42a6ada0c8d-kube-api-access-jhq84\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-q4p4p\" (UID: \"d0770d35-0c7d-4fe6-91f3-e42a6ada0c8d\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-q4p4p" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.663673 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert\") pod \"infra-operator-controller-manager-6d99759cf-vb95d\" (UID: \"f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" Dec 27 06:02:50 crc kubenswrapper[4760]: E1227 06:02:50.664006 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 27 06:02:50 crc kubenswrapper[4760]: E1227 
06:02:50.664101 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert podName:f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8 nodeName:}" failed. No retries permitted until 2025-12-27 06:02:51.16405828 +0000 UTC m=+1093.924127595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert") pod "infra-operator-controller-manager-6d99759cf-vb95d" (UID: "f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8") : secret "infra-operator-webhook-server-cert" not found Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.664808 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.666756 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-ggw2m" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.673598 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j469r\" (UniqueName: \"kubernetes.io/projected/177b0c02-1f5b-4315-b854-5465123ebcab-kube-api-access-j469r\") pod \"glance-operator-controller-manager-688f464774-5hjlr\" (UID: \"177b0c02-1f5b-4315-b854-5465123ebcab\") " pod="openstack-operators/glance-operator-controller-manager-688f464774-5hjlr" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.688124 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhq84\" (UniqueName: \"kubernetes.io/projected/d0770d35-0c7d-4fe6-91f3-e42a6ada0c8d-kube-api-access-jhq84\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-q4p4p\" (UID: \"d0770d35-0c7d-4fe6-91f3-e42a6ada0c8d\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-q4p4p" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.688566 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-688f464774-5hjlr" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.722121 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-kss4t"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.726789 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhdmz\" (UniqueName: \"kubernetes.io/projected/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-kube-api-access-jhdmz\") pod \"infra-operator-controller-manager-6d99759cf-vb95d\" (UID: \"f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.730687 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5fdd9786f7-674bs"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.733593 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slvzg\" (UniqueName: \"kubernetes.io/projected/11fdabd0-f272-435e-86cf-a3fe7343eb0f-kube-api-access-slvzg\") pod \"heat-operator-controller-manager-658dd65b86-4fmwm\" (UID: \"11fdabd0-f272-435e-86cf-a3fe7343eb0f\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-4fmwm" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.736504 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-674bs" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.745835 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-dcm8j" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.754930 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-q4p4p" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.756375 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-q874z"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.757238 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-q874z" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.767759 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhj5p\" (UniqueName: \"kubernetes.io/projected/fe17c9a5-ed8f-4fd6-8b6b-e2e2107092a2-kube-api-access-hhj5p\") pod \"ironic-operator-controller-manager-f99f54bc8-kss4t\" (UID: \"fe17c9a5-ed8f-4fd6-8b6b-e2e2107092a2\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-kss4t" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.771027 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-9sr6t" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.795216 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-q874z"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.800878 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5fdd9786f7-674bs"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.810315 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhj5p\" (UniqueName: \"kubernetes.io/projected/fe17c9a5-ed8f-4fd6-8b6b-e2e2107092a2-kube-api-access-hhj5p\") pod \"ironic-operator-controller-manager-f99f54bc8-kss4t\" (UID: \"fe17c9a5-ed8f-4fd6-8b6b-e2e2107092a2\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-kss4t" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.813698 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6d59c96c98-ksz65"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.814580 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6d59c96c98-ksz65" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.829352 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-mn9t9" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.833256 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6d59c96c98-ksz65"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.840561 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.845337 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.851211 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.852021 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.852250 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-6jwz9" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.854445 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-dvnwq" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.868833 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptqfh\" (UniqueName: \"kubernetes.io/projected/160a8b25-a031-4204-870f-2385ceaaf80e-kube-api-access-ptqfh\") pod \"manila-operator-controller-manager-5fdd9786f7-674bs\" (UID: \"160a8b25-a031-4204-870f-2385ceaaf80e\") " pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-674bs" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.868904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv79t\" (UniqueName: \"kubernetes.io/projected/0d86a798-bc69-4423-96ce-dc1b4fd03bc8-kube-api-access-xv79t\") pod \"keystone-operator-controller-manager-568985c78-q874z\" (UID: \"0d86a798-bc69-4423-96ce-dc1b4fd03bc8\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-q874z" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.868944 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2x4q\" (UniqueName: \"kubernetes.io/projected/0bc672f2-6342-4276-a266-c6bbd7f1896c-kube-api-access-f2x4q\") pod \"mariadb-operator-controller-manager-6d59c96c98-ksz65\" (UID: \"0bc672f2-6342-4276-a266-c6bbd7f1896c\") " pod="openstack-operators/mariadb-operator-controller-manager-6d59c96c98-ksz65" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.869035 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.877428 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.898796 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-kss4t" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.905871 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-zjrv7"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.908931 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-zjrv7" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.914658 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-mz7f2" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.916478 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-568d76f566-slgbn" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.919341 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-zjrv7"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.927659 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.933660 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.936482 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.939035 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.939276 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-gwnlg" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.949756 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-7gh59"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.950838 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-7gh59" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.954147 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-57l79" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.961816 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-7fdbb74498-gnjnq"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.963705 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-7fdbb74498-gnjnq" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.968746 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-d7xjs" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.969731 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q77p\" (UniqueName: \"kubernetes.io/projected/5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96-kube-api-access-2q77p\") pod \"nova-operator-controller-manager-7fd66c86cd-f46xs\" (UID: \"5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96\") " pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.969783 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv79t\" (UniqueName: \"kubernetes.io/projected/0d86a798-bc69-4423-96ce-dc1b4fd03bc8-kube-api-access-xv79t\") pod \"keystone-operator-controller-manager-568985c78-q874z\" (UID: \"0d86a798-bc69-4423-96ce-dc1b4fd03bc8\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-q874z" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.969844 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2x4q\" (UniqueName: \"kubernetes.io/projected/0bc672f2-6342-4276-a266-c6bbd7f1896c-kube-api-access-f2x4q\") pod \"mariadb-operator-controller-manager-6d59c96c98-ksz65\" (UID: \"0bc672f2-6342-4276-a266-c6bbd7f1896c\") " pod="openstack-operators/mariadb-operator-controller-manager-6d59c96c98-ksz65" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.969878 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh95h\" (UniqueName: \"kubernetes.io/projected/733f56b3-c5b1-4f9c-a35c-bbde9c68ebf1-kube-api-access-kh95h\") pod \"octavia-operator-controller-manager-68c649d9d-zjrv7\" (UID: \"733f56b3-c5b1-4f9c-a35c-bbde9c68ebf1\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-zjrv7" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.969927 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbtdd\" (UniqueName: \"kubernetes.io/projected/4de77998-df1d-4b52-8bbe-3cb9b35356fd-kube-api-access-pbtdd\") pod \"neutron-operator-controller-manager-7cd87b778f-2rdv8\" (UID: \"4de77998-df1d-4b52-8bbe-3cb9b35356fd\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.969963 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptqfh\" (UniqueName: \"kubernetes.io/projected/160a8b25-a031-4204-870f-2385ceaaf80e-kube-api-access-ptqfh\") pod \"manila-operator-controller-manager-5fdd9786f7-674bs\" (UID: \"160a8b25-a031-4204-870f-2385ceaaf80e\") " pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-674bs" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.979253 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-g4z7d"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.988715 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-g4z7d" Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.990365 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-7fdbb74498-gnjnq"] Dec 27 06:02:50 crc kubenswrapper[4760]: I1227 06:02:50.993814 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-b95nb" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.001971 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptqfh\" (UniqueName: \"kubernetes.io/projected/160a8b25-a031-4204-870f-2385ceaaf80e-kube-api-access-ptqfh\") pod \"manila-operator-controller-manager-5fdd9786f7-674bs\" (UID: \"160a8b25-a031-4204-870f-2385ceaaf80e\") " pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-674bs" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.004040 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-4fmwm" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.004391 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-7gh59"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.010166 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-g4z7d"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.015491 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fbb89d79f-gwngw"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.017874 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fbb89d79f-gwngw" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.020614 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv79t\" (UniqueName: \"kubernetes.io/projected/0d86a798-bc69-4423-96ce-dc1b4fd03bc8-kube-api-access-xv79t\") pod \"keystone-operator-controller-manager-568985c78-q874z\" (UID: \"0d86a798-bc69-4423-96ce-dc1b4fd03bc8\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-q874z" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.028070 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-m6bdx" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.029476 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fbb89d79f-gwngw"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.037233 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-74f84d69b6-vvjqd"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.038124 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-74f84d69b6-vvjqd" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.038791 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2x4q\" (UniqueName: \"kubernetes.io/projected/0bc672f2-6342-4276-a266-c6bbd7f1896c-kube-api-access-f2x4q\") pod \"mariadb-operator-controller-manager-6d59c96c98-ksz65\" (UID: \"0bc672f2-6342-4276-a266-c6bbd7f1896c\") " pod="openstack-operators/mariadb-operator-controller-manager-6d59c96c98-ksz65" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.042179 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-dpjpc" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.044062 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-74f84d69b6-vvjqd"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.070760 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn8hw\" (UniqueName: \"kubernetes.io/projected/6c7e09e0-9c92-40fa-99e1-b510ab43fb39-kube-api-access-fn8hw\") pod \"swift-operator-controller-manager-bb586bbf4-g4z7d\" (UID: \"6c7e09e0-9c92-40fa-99e1-b510ab43fb39\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-g4z7d" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.070819 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrhmj\" (UniqueName: \"kubernetes.io/projected/3250a06b-b268-405c-b891-7657e4818fe8-kube-api-access-rrhmj\") pod \"telemetry-operator-controller-manager-5fbb89d79f-gwngw\" (UID: \"3250a06b-b268-405c-b891-7657e4818fe8\") " pod="openstack-operators/telemetry-operator-controller-manager-5fbb89d79f-gwngw" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.070838 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz\" (UID: \"ed40f1e4-9823-4b11-b48e-4f4019a3796c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.070862 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q77p\" (UniqueName: \"kubernetes.io/projected/5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96-kube-api-access-2q77p\") pod \"nova-operator-controller-manager-7fd66c86cd-f46xs\" (UID: \"5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96\") " pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.070901 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw5t9\" (UniqueName: \"kubernetes.io/projected/e4d879d7-4ae6-4499-89ed-98f48e4f0541-kube-api-access-dw5t9\") pod \"ovn-operator-controller-manager-bf6d4f946-7gh59\" (UID: \"e4d879d7-4ae6-4499-89ed-98f48e4f0541\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-7gh59" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.070925 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54tfq\" (UniqueName: 
\"kubernetes.io/projected/ed40f1e4-9823-4b11-b48e-4f4019a3796c-kube-api-access-54tfq\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz\" (UID: \"ed40f1e4-9823-4b11-b48e-4f4019a3796c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.070962 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkcbt\" (UniqueName: \"kubernetes.io/projected/d8cc7b7c-b6b1-4eab-b0bb-616b227dc790-kube-api-access-bkcbt\") pod \"placement-operator-controller-manager-7fdbb74498-gnjnq\" (UID: \"d8cc7b7c-b6b1-4eab-b0bb-616b227dc790\") " pod="openstack-operators/placement-operator-controller-manager-7fdbb74498-gnjnq" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.070990 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh95h\" (UniqueName: \"kubernetes.io/projected/733f56b3-c5b1-4f9c-a35c-bbde9c68ebf1-kube-api-access-kh95h\") pod \"octavia-operator-controller-manager-68c649d9d-zjrv7\" (UID: \"733f56b3-c5b1-4f9c-a35c-bbde9c68ebf1\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-zjrv7" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.071027 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbtdd\" (UniqueName: \"kubernetes.io/projected/4de77998-df1d-4b52-8bbe-3cb9b35356fd-kube-api-access-pbtdd\") pod \"neutron-operator-controller-manager-7cd87b778f-2rdv8\" (UID: \"4de77998-df1d-4b52-8bbe-3cb9b35356fd\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.075890 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-57d64f56b7-g5mgk"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.076881 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-57d64f56b7-g5mgk" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.078525 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-djx26" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.094477 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-57d64f56b7-g5mgk"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.096627 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q77p\" (UniqueName: \"kubernetes.io/projected/5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96-kube-api-access-2q77p\") pod \"nova-operator-controller-manager-7fd66c86cd-f46xs\" (UID: \"5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96\") " pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.101318 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbtdd\" (UniqueName: \"kubernetes.io/projected/4de77998-df1d-4b52-8bbe-3cb9b35356fd-kube-api-access-pbtdd\") pod \"neutron-operator-controller-manager-7cd87b778f-2rdv8\" (UID: \"4de77998-df1d-4b52-8bbe-3cb9b35356fd\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.106935 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh95h\" (UniqueName: \"kubernetes.io/projected/733f56b3-c5b1-4f9c-a35c-bbde9c68ebf1-kube-api-access-kh95h\") pod \"octavia-operator-controller-manager-68c649d9d-zjrv7\" (UID: \"733f56b3-c5b1-4f9c-a35c-bbde9c68ebf1\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-zjrv7" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.133898 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-674bs" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.159364 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.160341 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.167204 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-q874z" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.167516 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.167784 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.168198 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-5zm2r" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.171730 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrhmj\" (UniqueName: \"kubernetes.io/projected/3250a06b-b268-405c-b891-7657e4818fe8-kube-api-access-rrhmj\") pod \"telemetry-operator-controller-manager-5fbb89d79f-gwngw\" (UID: \"3250a06b-b268-405c-b891-7657e4818fe8\") " pod="openstack-operators/telemetry-operator-controller-manager-5fbb89d79f-gwngw" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.171757 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz\" (UID: \"ed40f1e4-9823-4b11-b48e-4f4019a3796c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.171781 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw5t9\" (UniqueName: \"kubernetes.io/projected/e4d879d7-4ae6-4499-89ed-98f48e4f0541-kube-api-access-dw5t9\") pod \"ovn-operator-controller-manager-bf6d4f946-7gh59\" (UID: \"e4d879d7-4ae6-4499-89ed-98f48e4f0541\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-7gh59" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.171806 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54tfq\" (UniqueName: \"kubernetes.io/projected/ed40f1e4-9823-4b11-b48e-4f4019a3796c-kube-api-access-54tfq\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz\" (UID: \"ed40f1e4-9823-4b11-b48e-4f4019a3796c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.171842 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thtcg\" (UniqueName: \"kubernetes.io/projected/f7aa22fd-6d4a-484b-9814-3e8e8766a423-kube-api-access-thtcg\") pod \"test-operator-controller-manager-74f84d69b6-vvjqd\" (UID: \"f7aa22fd-6d4a-484b-9814-3e8e8766a423\") " pod="openstack-operators/test-operator-controller-manager-74f84d69b6-vvjqd" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.171862 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkcbt\" (UniqueName: \"kubernetes.io/projected/d8cc7b7c-b6b1-4eab-b0bb-616b227dc790-kube-api-access-bkcbt\") pod \"placement-operator-controller-manager-7fdbb74498-gnjnq\" (UID: \"d8cc7b7c-b6b1-4eab-b0bb-616b227dc790\") " pod="openstack-operators/placement-operator-controller-manager-7fdbb74498-gnjnq" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.171893 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert\") pod \"infra-operator-controller-manager-6d99759cf-vb95d\" (UID: \"f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.171922 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkr64\" (UniqueName: \"kubernetes.io/projected/f5ce1921-f89d-4e7f-b7b8-93d6a2a284bf-kube-api-access-tkr64\") pod \"watcher-operator-controller-manager-57d64f56b7-g5mgk\" (UID: \"f5ce1921-f89d-4e7f-b7b8-93d6a2a284bf\") " pod="openstack-operators/watcher-operator-controller-manager-57d64f56b7-g5mgk" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.171945 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn8hw\" (UniqueName: \"kubernetes.io/projected/6c7e09e0-9c92-40fa-99e1-b510ab43fb39-kube-api-access-fn8hw\") pod \"swift-operator-controller-manager-bb586bbf4-g4z7d\" (UID: \"6c7e09e0-9c92-40fa-99e1-b510ab43fb39\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-g4z7d" Dec 27 06:02:51 crc kubenswrapper[4760]: E1227 06:02:51.172335 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 27 06:02:51 crc kubenswrapper[4760]: E1227 06:02:51.172374 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert podName:ed40f1e4-9823-4b11-b48e-4f4019a3796c nodeName:}" failed. No retries permitted until 2025-12-27 06:02:51.672359471 +0000 UTC m=+1094.432428776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" (UID: "ed40f1e4-9823-4b11-b48e-4f4019a3796c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 27 06:02:51 crc kubenswrapper[4760]: E1227 06:02:51.172720 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 27 06:02:51 crc kubenswrapper[4760]: E1227 06:02:51.172743 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert podName:f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8 nodeName:}" failed. No retries permitted until 2025-12-27 06:02:52.17273592 +0000 UTC m=+1094.932805235 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert") pod "infra-operator-controller-manager-6d99759cf-vb95d" (UID: "f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8") : secret "infra-operator-webhook-server-cert" not found Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.186372 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.197384 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw5t9\" (UniqueName: \"kubernetes.io/projected/e4d879d7-4ae6-4499-89ed-98f48e4f0541-kube-api-access-dw5t9\") pod \"ovn-operator-controller-manager-bf6d4f946-7gh59\" (UID: \"e4d879d7-4ae6-4499-89ed-98f48e4f0541\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-7gh59" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.199109 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6d59c96c98-ksz65" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.199786 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn8hw\" (UniqueName: \"kubernetes.io/projected/6c7e09e0-9c92-40fa-99e1-b510ab43fb39-kube-api-access-fn8hw\") pod \"swift-operator-controller-manager-bb586bbf4-g4z7d\" (UID: \"6c7e09e0-9c92-40fa-99e1-b510ab43fb39\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-g4z7d" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.200373 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrhmj\" (UniqueName: \"kubernetes.io/projected/3250a06b-b268-405c-b891-7657e4818fe8-kube-api-access-rrhmj\") pod \"telemetry-operator-controller-manager-5fbb89d79f-gwngw\" (UID: \"3250a06b-b268-405c-b891-7657e4818fe8\") " pod="openstack-operators/telemetry-operator-controller-manager-5fbb89d79f-gwngw" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.201151 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkcbt\" (UniqueName: \"kubernetes.io/projected/d8cc7b7c-b6b1-4eab-b0bb-616b227dc790-kube-api-access-bkcbt\") pod \"placement-operator-controller-manager-7fdbb74498-gnjnq\" (UID: \"d8cc7b7c-b6b1-4eab-b0bb-616b227dc790\") " pod="openstack-operators/placement-operator-controller-manager-7fdbb74498-gnjnq" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.201202 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ps2r8"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.202168 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ps2r8" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.204791 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54tfq\" (UniqueName: \"kubernetes.io/projected/ed40f1e4-9823-4b11-b48e-4f4019a3796c-kube-api-access-54tfq\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz\" (UID: \"ed40f1e4-9823-4b11-b48e-4f4019a3796c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.205069 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-h6wgf" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.228225 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ps2r8"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.228570 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.236800 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.261131 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-zjrv7" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.272760 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kclwq\" (UniqueName: \"kubernetes.io/projected/b0d5f45a-ddba-4d84-b567-17193cfaef2b-kube-api-access-kclwq\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.272840 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.272874 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkr64\" (UniqueName: \"kubernetes.io/projected/f5ce1921-f89d-4e7f-b7b8-93d6a2a284bf-kube-api-access-tkr64\") pod \"watcher-operator-controller-manager-57d64f56b7-g5mgk\" (UID: \"f5ce1921-f89d-4e7f-b7b8-93d6a2a284bf\") " pod="openstack-operators/watcher-operator-controller-manager-57d64f56b7-g5mgk" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.272893 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g774g\" (UniqueName: \"kubernetes.io/projected/4aa3ec72-b438-4ef7-a213-0b6148aed51b-kube-api-access-g774g\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ps2r8\" (UID: \"4aa3ec72-b438-4ef7-a213-0b6148aed51b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ps2r8" Dec 27 06:02:51 crc 
kubenswrapper[4760]: I1227 06:02:51.272921 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.272985 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thtcg\" (UniqueName: \"kubernetes.io/projected/f7aa22fd-6d4a-484b-9814-3e8e8766a423-kube-api-access-thtcg\") pod \"test-operator-controller-manager-74f84d69b6-vvjqd\" (UID: \"f7aa22fd-6d4a-484b-9814-3e8e8766a423\") " pod="openstack-operators/test-operator-controller-manager-74f84d69b6-vvjqd" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.292949 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkr64\" (UniqueName: \"kubernetes.io/projected/f5ce1921-f89d-4e7f-b7b8-93d6a2a284bf-kube-api-access-tkr64\") pod \"watcher-operator-controller-manager-57d64f56b7-g5mgk\" (UID: \"f5ce1921-f89d-4e7f-b7b8-93d6a2a284bf\") " pod="openstack-operators/watcher-operator-controller-manager-57d64f56b7-g5mgk" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.293776 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thtcg\" (UniqueName: \"kubernetes.io/projected/f7aa22fd-6d4a-484b-9814-3e8e8766a423-kube-api-access-thtcg\") pod \"test-operator-controller-manager-74f84d69b6-vvjqd\" (UID: \"f7aa22fd-6d4a-484b-9814-3e8e8766a423\") " pod="openstack-operators/test-operator-controller-manager-74f84d69b6-vvjqd" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.329302 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-7gh59" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.335716 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-ggw2m"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.352529 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-7fdbb74498-gnjnq" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.356470 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-688f464774-5hjlr"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.373612 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-g4z7d" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.374199 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kclwq\" (UniqueName: \"kubernetes.io/projected/b0d5f45a-ddba-4d84-b567-17193cfaef2b-kube-api-access-kclwq\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.374236 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.374277 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g774g\" (UniqueName: \"kubernetes.io/projected/4aa3ec72-b438-4ef7-a213-0b6148aed51b-kube-api-access-g774g\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ps2r8\" (UID: \"4aa3ec72-b438-4ef7-a213-0b6148aed51b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ps2r8" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.374327 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:02:51 crc kubenswrapper[4760]: E1227 06:02:51.374500 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 27 06:02:51 crc kubenswrapper[4760]: E1227 06:02:51.374549 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs podName:b0d5f45a-ddba-4d84-b567-17193cfaef2b nodeName:}" failed. No retries permitted until 2025-12-27 06:02:51.874533564 +0000 UTC m=+1094.634602879 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs") pod "openstack-operator-controller-manager-66d6f5c46d-cznpr" (UID: "b0d5f45a-ddba-4d84-b567-17193cfaef2b") : secret "webhook-server-cert" not found Dec 27 06:02:51 crc kubenswrapper[4760]: E1227 06:02:51.374576 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 27 06:02:51 crc kubenswrapper[4760]: E1227 06:02:51.374627 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs podName:b0d5f45a-ddba-4d84-b567-17193cfaef2b nodeName:}" failed. No retries permitted until 2025-12-27 06:02:51.874609776 +0000 UTC m=+1094.634679091 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs") pod "openstack-operator-controller-manager-66d6f5c46d-cznpr" (UID: "b0d5f45a-ddba-4d84-b567-17193cfaef2b") : secret "metrics-server-cert" not found Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.390529 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 27 06:02:51 crc kubenswrapper[4760]: W1227 06:02:51.397300 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod177b0c02_1f5b_4315_b854_5465123ebcab.slice/crio-fc3faeb056649f3f4abbc44eedad1f9699b39c809a0fb09c63232641c3b5b722 WatchSource:0}: Error finding container fc3faeb056649f3f4abbc44eedad1f9699b39c809a0fb09c63232641c3b5b722: Status 404 returned error can't find the container with id fc3faeb056649f3f4abbc44eedad1f9699b39c809a0fb09c63232641c3b5b722 Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.401354 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fbb89d79f-gwngw" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.401901 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kclwq\" (UniqueName: \"kubernetes.io/projected/b0d5f45a-ddba-4d84-b567-17193cfaef2b-kube-api-access-kclwq\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.406206 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g774g\" (UniqueName: \"kubernetes.io/projected/4aa3ec72-b438-4ef7-a213-0b6148aed51b-kube-api-access-g774g\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ps2r8\" (UID: \"4aa3ec72-b438-4ef7-a213-0b6148aed51b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ps2r8" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.424331 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-74f84d69b6-vvjqd" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.469166 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-57d64f56b7-g5mgk" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.544246 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5f98b4754f-km8pb"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.544470 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-q4p4p"] Dec 27 06:02:51 crc kubenswrapper[4760]: W1227 06:02:51.547286 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea3b319d_0327_447a_a3fb_b872f98c5e99.slice/crio-02eb3440688a4918627cf39446a3c0cf278cccc6fc8485538e5338ad1e9753fd WatchSource:0}: Error finding container 02eb3440688a4918627cf39446a3c0cf278cccc6fc8485538e5338ad1e9753fd: Status 404 returned error can't find the container with id 02eb3440688a4918627cf39446a3c0cf278cccc6fc8485538e5338ad1e9753fd Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.548374 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ps2r8" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.638966 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-568d76f566-slgbn"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.649620 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-kss4t"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.675219 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-4fmwm"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.681388 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz\" (UID: \"ed40f1e4-9823-4b11-b48e-4f4019a3796c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" Dec 27 06:02:51 crc kubenswrapper[4760]: E1227 06:02:51.681613 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 27 06:02:51 crc kubenswrapper[4760]: E1227 06:02:51.681664 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert podName:ed40f1e4-9823-4b11-b48e-4f4019a3796c nodeName:}" failed. No retries permitted until 2025-12-27 06:02:52.681649394 +0000 UTC m=+1095.441718709 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" (UID: "ed40f1e4-9823-4b11-b48e-4f4019a3796c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.796228 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5fdd9786f7-674bs"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.814158 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-q4p4p" event={"ID":"d0770d35-0c7d-4fe6-91f3-e42a6ada0c8d","Type":"ContainerStarted","Data":"e3b795a19c63c575379998f6fc4407bc3bf6b3e5ef6ed1ed98cb9a54aff754bc"} Dec 27 06:02:51 crc kubenswrapper[4760]: W1227 06:02:51.815992 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod160a8b25_a031_4204_870f_2385ceaaf80e.slice/crio-743310b85584ffb2ec7fda2d2b3b6ab5f0438daa6833b20bae8c5b8248b7b9f8 WatchSource:0}: Error finding container 743310b85584ffb2ec7fda2d2b3b6ab5f0438daa6833b20bae8c5b8248b7b9f8: Status 404 returned error can't find the container with id 743310b85584ffb2ec7fda2d2b3b6ab5f0438daa6833b20bae8c5b8248b7b9f8 Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.816110 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-688f464774-5hjlr" event={"ID":"177b0c02-1f5b-4315-b854-5465123ebcab","Type":"ContainerStarted","Data":"fc3faeb056649f3f4abbc44eedad1f9699b39c809a0fb09c63232641c3b5b722"} Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.818723 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-4fmwm" event={"ID":"11fdabd0-f272-435e-86cf-a3fe7343eb0f","Type":"ContainerStarted","Data":"fe49a0b394c5ddab1d5afec34fa2f126761baf7e81a81a3d8a6b51f99ae13b29"} Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.820287 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-ggw2m" event={"ID":"0dc73c70-a9ca-4cc4-a69a-21eaca2dd2c2","Type":"ContainerStarted","Data":"de0dff0e4825d029b66a98168a97ad60280ecbdee7306059248b84fe86bc0946"} Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.822869 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-kss4t" event={"ID":"fe17c9a5-ed8f-4fd6-8b6b-e2e2107092a2","Type":"ContainerStarted","Data":"f5a4df63790a05aad6bffec5498a27167f54daae69308ac3da9a212f1bf8ad1f"} Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.824007 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5f98b4754f-km8pb" event={"ID":"ea3b319d-0327-447a-a3fb-b872f98c5e99","Type":"ContainerStarted","Data":"02eb3440688a4918627cf39446a3c0cf278cccc6fc8485538e5338ad1e9753fd"} Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.825797 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-568d76f566-slgbn" event={"ID":"73020260-4c3a-4bd2-8749-58d052d076e3","Type":"ContainerStarted","Data":"4984ac79e64eb445eebc9ca091fae6e82f31d41ca8c7e4b7bd1b5cb2450b2de5"} Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.884940 
4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.885325 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:02:51 crc kubenswrapper[4760]: E1227 06:02:51.885159 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 27 06:02:51 crc kubenswrapper[4760]: E1227 06:02:51.885523 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs podName:b0d5f45a-ddba-4d84-b567-17193cfaef2b nodeName:}" failed. No retries permitted until 2025-12-27 06:02:52.88550961 +0000 UTC m=+1095.645578925 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs") pod "openstack-operator-controller-manager-66d6f5c46d-cznpr" (UID: "b0d5f45a-ddba-4d84-b567-17193cfaef2b") : secret "metrics-server-cert" not found Dec 27 06:02:51 crc kubenswrapper[4760]: E1227 06:02:51.885476 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 27 06:02:51 crc kubenswrapper[4760]: E1227 06:02:51.885829 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs podName:b0d5f45a-ddba-4d84-b567-17193cfaef2b nodeName:}" failed. No retries permitted until 2025-12-27 06:02:52.885820957 +0000 UTC m=+1095.645890272 (durationBeforeRetry 1s). 
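The W-level manager.go:1169 "Failed to process watch event ... Status 404" messages scattered through this window are a startup race rather than a fault: cadvisor sees the new crio-<id> cgroup before CRI-O has finished registering the container, so the one-off lookup 404s. The first such IDs here already have their matching PLEG "ContainerStarted" events in the entries just above (fc3faeb0… for glance-operator, 02eb3440… for cinder-operator). A sketch that cross-checks this against a saved copy of the journal (the input path is an assumption):

```python
# Sketch: confirm cadvisor's 404 watch-event container IDs later appear in PLEG
# ContainerStarted events, i.e. the warnings are a benign registration race.
import re
import sys

text = open(sys.argv[1], encoding="utf-8", errors="replace").read()

not_found = set(
    re.findall(r"can't\s+find\s+the\s+container\s+with\s+id\s+([0-9a-f]{64})", text)
)
started = set(re.findall(r'"ContainerStarted","Data":"([0-9a-f]{64})"', text))

for cid in sorted(not_found):
    verdict = "started later" if cid in started else "never started; investigate"
    print(f"{cid[:12]}: {verdict}")
```

Any ID that never reaches a ContainerStarted event would be worth chasing; the rest can be ignored.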
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs") pod "openstack-operator-controller-manager-66d6f5c46d-cznpr" (UID: "b0d5f45a-ddba-4d84-b567-17193cfaef2b") : secret "webhook-server-cert" not found Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.908962 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-q874z"] Dec 27 06:02:51 crc kubenswrapper[4760]: I1227 06:02:51.927562 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs"] Dec 27 06:02:51 crc kubenswrapper[4760]: W1227 06:02:51.930099 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d86a798_bc69_4423_96ce_dc1b4fd03bc8.slice/crio-3b104bd3897d61e40e554061dc1093d2cc8e6896b05b398d0678fcf40cc8a730 WatchSource:0}: Error finding container 3b104bd3897d61e40e554061dc1093d2cc8e6896b05b398d0678fcf40cc8a730: Status 404 returned error can't find the container with id 3b104bd3897d61e40e554061dc1093d2cc8e6896b05b398d0678fcf40cc8a730 Dec 27 06:02:51 crc kubenswrapper[4760]: W1227 06:02:51.951273 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d46fdfd_85d1_4aa7_a8bc_0ec2c7eb6f96.slice/crio-252bc2a7324305d7a588eef8e4d4f39aabd84cb3aa8a8fa20ecc03636a01ec31 WatchSource:0}: Error finding container 252bc2a7324305d7a588eef8e4d4f39aabd84cb3aa8a8fa20ecc03636a01ec31: Status 404 returned error can't find the container with id 252bc2a7324305d7a588eef8e4d4f39aabd84cb3aa8a8fa20ecc03636a01ec31 Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.020872 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-7gh59"] Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.029194 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6d59c96c98-ksz65"] Dec 27 06:02:52 crc kubenswrapper[4760]: W1227 06:02:52.031895 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4d879d7_4ae6_4499_89ed_98f48e4f0541.slice/crio-8883aaca5e4ba28994e2657a2bf62308fa2d5fb1ca8fb904664ce2e5865a8c17 WatchSource:0}: Error finding container 8883aaca5e4ba28994e2657a2bf62308fa2d5fb1ca8fb904664ce2e5865a8c17: Status 404 returned error can't find the container with id 8883aaca5e4ba28994e2657a2bf62308fa2d5fb1ca8fb904664ce2e5865a8c17 Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.060586 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8"] Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.065163 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-74f84d69b6-vvjqd"] Dec 27 06:02:52 crc kubenswrapper[4760]: W1227 06:02:52.074852 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7aa22fd_6d4a_484b_9814_3e8e8766a423.slice/crio-b0b8549d30d7158bbe61ebe095643fc62fadddb382a001e0d5ee07dcdfb72c0b WatchSource:0}: Error finding container b0b8549d30d7158bbe61ebe095643fc62fadddb382a001e0d5ee07dcdfb72c0b: Status 404 returned error can't find the container 
with id b0b8549d30d7158bbe61ebe095643fc62fadddb382a001e0d5ee07dcdfb72c0b Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.081744 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pbtdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7cd87b778f-2rdv8_openstack-operators(4de77998-df1d-4b52-8bbe-3cb9b35356fd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.083036 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8" podUID="4de77998-df1d-4b52-8bbe-3cb9b35356fd" Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.184853 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-g4z7d"] Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.190272 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert\") pod \"infra-operator-controller-manager-6d99759cf-vb95d\" (UID: \"f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8\") " 
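The ErrImagePull failures beginning here ("pull QPS exceeded", first for neutron-operator and then for several more managers below) are produced by kubelet's own image-pull rate limiter, not by the registry: with the upstream KubeletConfiguration defaults of registryPullQPS: 5 and registryBurst: 10, roughly twenty operator deployments starting in the same second are guaranteed to overflow the bucket, and the rejected pods fall back to normal image-pull backoff. A minimal token-bucket sketch of that behaviour; the class is illustrative only (not kubelet code), with the upstream defaults filled in:

```python
# Illustrative token bucket mirroring kubelet's image-pull limiter
# (registryPullQPS / registryBurst; 5 and 10 are the upstream defaults).
import time

class TokenBucket:
    def __init__(self, qps: float = 5.0, burst: int = 10):
        self.rate, self.capacity = qps, burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # kubelet reports this as ErrImagePull: "pull QPS exceeded"

# ~20 operator pods pulling within the same instant: the first `burst` pulls are
# admitted, the rest are rejected, exactly as in the entries below.
bucket = TokenBucket()
results = [bucket.try_acquire() for _ in range(20)]
print(f"{results.count(True)} admitted, {results.count(False)} rejected")
```

Because the rejected pods retry with backoff, these errors are self-healing; raising registryPullQPS (or setting it to 0, which disables the limit) in the kubelet config trades registry load for a faster bootstrap on nodes like this one that start many pods at once.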
pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.190494 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.190549 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert podName:f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8 nodeName:}" failed. No retries permitted until 2025-12-27 06:02:54.190530969 +0000 UTC m=+1096.950600304 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert") pod "infra-operator-controller-manager-6d99759cf-vb95d" (UID: "f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8") : secret "infra-operator-webhook-server-cert" not found Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.202848 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-7fdbb74498-gnjnq"] Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.216323 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-zjrv7"] Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.222017 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fn8hw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-bb586bbf4-g4z7d_openstack-operators(6c7e09e0-9c92-40fa-99e1-b510ab43fb39): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.223294 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-g4z7d" podUID="6c7e09e0-9c92-40fa-99e1-b510ab43fb39" Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.223922 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fbb89d79f-gwngw"] Dec 27 06:02:52 crc kubenswrapper[4760]: W1227 06:02:52.227125 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3250a06b_b268_405c_b891_7657e4818fe8.slice/crio-74173715e116df192b8c688276f2909642fb8035fda139f5a85569d1aaf04953 WatchSource:0}: Error finding container 74173715e116df192b8c688276f2909642fb8035fda139f5a85569d1aaf04953: Status 404 returned error can't find the container with id 74173715e116df192b8c688276f2909642fb8035fda139f5a85569d1aaf04953 Dec 27 06:02:52 crc kubenswrapper[4760]: W1227 06:02:52.228172 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod733f56b3_c5b1_4f9c_a35c_bbde9c68ebf1.slice/crio-a4bc0591454cf3a1e1a1cc01f8149ccd8e9393a2da6231358c19a12ce2ec2b09 WatchSource:0}: Error finding container a4bc0591454cf3a1e1a1cc01f8149ccd8e9393a2da6231358c19a12ce2ec2b09: Status 404 returned error can't find the container with id a4bc0591454cf3a1e1a1cc01f8149ccd8e9393a2da6231358c19a12ce2ec2b09 Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.233789 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:918169c851c9b8e1b5b8c921c828d34abb035cd56c63aadeee439dca9f1eae12,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rrhmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5fbb89d79f-gwngw_openstack-operators(3250a06b-b268-405c-b891-7657e4818fe8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.234253 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kh95h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-68c649d9d-zjrv7_openstack-operators(733f56b3-c5b1-4f9c-a35c-bbde9c68ebf1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.235447 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5fbb89d79f-gwngw" podUID="3250a06b-b268-405c-b891-7657e4818fe8" Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.235455 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-zjrv7" podUID="733f56b3-c5b1-4f9c-a35c-bbde9c68ebf1" Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.311984 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ps2r8"] Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.322732 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-57d64f56b7-g5mgk"] Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.342841 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:e617a3f6d1014f6114411f69f3b61ea1b399de157d74022696054fef62a2262a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tkr64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-57d64f56b7-g5mgk_openstack-operators(f5ce1921-f89d-4e7f-b7b8-93d6a2a284bf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.344099 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-57d64f56b7-g5mgk" podUID="f5ce1921-f89d-4e7f-b7b8-93d6a2a284bf" Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.368828 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g774g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-ps2r8_openstack-operators(4aa3ec72-b438-4ef7-a213-0b6148aed51b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.370649 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ps2r8" podUID="4aa3ec72-b438-4ef7-a213-0b6148aed51b" Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.697809 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz\" (UID: \"ed40f1e4-9823-4b11-b48e-4f4019a3796c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.698000 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.698080 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert podName:ed40f1e4-9823-4b11-b48e-4f4019a3796c nodeName:}" failed. No retries permitted until 2025-12-27 06:02:54.698056779 +0000 UTC m=+1097.458126094 (durationBeforeRetry 2s). 
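[annotation] The repeated "ErrImagePull: pull QPS exceeded" records above are not registry failures: the kubelet rate-limits image pulls on the client side (the KubeletConfiguration fields registryPullQPS and registryBurst, which default to 5 and 10), and a batch of operator pods scheduled at the same moment exhausts the burst. A minimal sketch of the same token-bucket behavior, using golang.org/x/time/rate rather than the kubelet's internal flowcontrol package; the image names are placeholders:

package main

import (
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

// pull simulates one image pull gated by a client-side token bucket,
// analogous to the kubelet's registryPullQPS/registryBurst throttle
// that produces the "pull QPS exceeded" errors in the log.
func pull(lim *rate.Limiter, image string) error {
	if !lim.Allow() { // no token available right now: fail fast
		return fmt.Errorf("pull QPS exceeded: %s", image)
	}
	return nil // a real pull would happen here
}

func main() {
	lim := rate.NewLimiter(rate.Limit(5), 10) // 5 QPS, burst 10 (kubelet defaults)
	for i := 0; i < 15; i++ {
		if err := pull(lim, fmt.Sprintf("quay.io/example/operator-%d", i)); err != nil {
			fmt.Println("E:", err) // pulls 11-15 exceed the burst and fail
		}
	}
	time.Sleep(time.Second) // tokens refill at 5/s; the kubelet retries later
}

Once the bucket refills the kubelet retries on its own, which is why the same pods report ContainerStarted and pull successfully a few seconds later in the log.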
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" (UID: "ed40f1e4-9823-4b11-b48e-4f4019a3796c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.843314 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ps2r8" event={"ID":"4aa3ec72-b438-4ef7-a213-0b6148aed51b","Type":"ContainerStarted","Data":"b0efb60c29afc00c8aa8c3575cb266e168cb617671cf57795836e6a07008f22d"} Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.844989 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ps2r8" podUID="4aa3ec72-b438-4ef7-a213-0b6148aed51b" Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.845858 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-zjrv7" event={"ID":"733f56b3-c5b1-4f9c-a35c-bbde9c68ebf1","Type":"ContainerStarted","Data":"a4bc0591454cf3a1e1a1cc01f8149ccd8e9393a2da6231358c19a12ce2ec2b09"} Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.850695 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-zjrv7" podUID="733f56b3-c5b1-4f9c-a35c-bbde9c68ebf1" Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.855789 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-q874z" event={"ID":"0d86a798-bc69-4423-96ce-dc1b4fd03bc8","Type":"ContainerStarted","Data":"3b104bd3897d61e40e554061dc1093d2cc8e6896b05b398d0678fcf40cc8a730"} Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.863585 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs" event={"ID":"5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96","Type":"ContainerStarted","Data":"252bc2a7324305d7a588eef8e4d4f39aabd84cb3aa8a8fa20ecc03636a01ec31"} Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.866970 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-7fdbb74498-gnjnq" event={"ID":"d8cc7b7c-b6b1-4eab-b0bb-616b227dc790","Type":"ContainerStarted","Data":"39483b59d994180296badad6bdc6de0dfa98f8e7f8d718c37ec5ceb8040b6463"} Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.871623 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-57d64f56b7-g5mgk" event={"ID":"f5ce1921-f89d-4e7f-b7b8-93d6a2a284bf","Type":"ContainerStarted","Data":"c91243b90f2816db7eb60f7d394e1a9d1da2548c12a0ef83fc34db6d1a2f2e35"} Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.874078 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-5fbb89d79f-gwngw" event={"ID":"3250a06b-b268-405c-b891-7657e4818fe8","Type":"ContainerStarted","Data":"74173715e116df192b8c688276f2909642fb8035fda139f5a85569d1aaf04953"} Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.875011 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-674bs" event={"ID":"160a8b25-a031-4204-870f-2385ceaaf80e","Type":"ContainerStarted","Data":"743310b85584ffb2ec7fda2d2b3b6ab5f0438daa6833b20bae8c5b8248b7b9f8"} Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.881226 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:918169c851c9b8e1b5b8c921c828d34abb035cd56c63aadeee439dca9f1eae12\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fbb89d79f-gwngw" podUID="3250a06b-b268-405c-b891-7657e4818fe8" Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.882976 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:e617a3f6d1014f6114411f69f3b61ea1b399de157d74022696054fef62a2262a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-57d64f56b7-g5mgk" podUID="f5ce1921-f89d-4e7f-b7b8-93d6a2a284bf" Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.889857 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6d59c96c98-ksz65" event={"ID":"0bc672f2-6342-4276-a266-c6bbd7f1896c","Type":"ContainerStarted","Data":"a489c192605b2cdb3d660ae98afe3def199f41ef7482bbf05bcd71fe3c87e966"} Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.900907 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.900976 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.901115 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.901161 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs podName:b0d5f45a-ddba-4d84-b567-17193cfaef2b nodeName:}" failed. No retries permitted until 2025-12-27 06:02:54.901147135 +0000 UTC m=+1097.661216450 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs") pod "openstack-operator-controller-manager-66d6f5c46d-cznpr" (UID: "b0d5f45a-ddba-4d84-b567-17193cfaef2b") : secret "webhook-server-cert" not found Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.901399 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.901470 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs podName:b0d5f45a-ddba-4d84-b567-17193cfaef2b nodeName:}" failed. No retries permitted until 2025-12-27 06:02:54.901451513 +0000 UTC m=+1097.661520828 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs") pod "openstack-operator-controller-manager-66d6f5c46d-cznpr" (UID: "b0d5f45a-ddba-4d84-b567-17193cfaef2b") : secret "metrics-server-cert" not found Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.915275 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-g4z7d" event={"ID":"6c7e09e0-9c92-40fa-99e1-b510ab43fb39","Type":"ContainerStarted","Data":"b97c9c7d0c23743769722310fe3210d8cd85625dfa22a4c4f22ccb585d395740"} Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.917128 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-g4z7d" podUID="6c7e09e0-9c92-40fa-99e1-b510ab43fb39" Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.924930 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8" event={"ID":"4de77998-df1d-4b52-8bbe-3cb9b35356fd","Type":"ContainerStarted","Data":"f839610c3bad2c8769cbb9a0a2eec9a22fbcf38d97c32da54bca66fb07834882"} Dec 27 06:02:52 crc kubenswrapper[4760]: E1227 06:02:52.925820 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8" podUID="4de77998-df1d-4b52-8bbe-3cb9b35356fd" Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.926576 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-74f84d69b6-vvjqd" event={"ID":"f7aa22fd-6d4a-484b-9814-3e8e8766a423","Type":"ContainerStarted","Data":"b0b8549d30d7158bbe61ebe095643fc62fadddb382a001e0d5ee07dcdfb72c0b"} Dec 27 06:02:52 crc kubenswrapper[4760]: I1227 06:02:52.927701 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-7gh59" event={"ID":"e4d879d7-4ae6-4499-89ed-98f48e4f0541","Type":"ContainerStarted","Data":"8883aaca5e4ba28994e2657a2bf62308fa2d5fb1ca8fb904664ce2e5865a8c17"} Dec 27 06:02:53 crc kubenswrapper[4760]: E1227 06:02:53.939889 4760 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-g4z7d" podUID="6c7e09e0-9c92-40fa-99e1-b510ab43fb39" Dec 27 06:02:53 crc kubenswrapper[4760]: E1227 06:02:53.939894 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:918169c851c9b8e1b5b8c921c828d34abb035cd56c63aadeee439dca9f1eae12\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fbb89d79f-gwngw" podUID="3250a06b-b268-405c-b891-7657e4818fe8" Dec 27 06:02:53 crc kubenswrapper[4760]: E1227 06:02:53.939943 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:e617a3f6d1014f6114411f69f3b61ea1b399de157d74022696054fef62a2262a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-57d64f56b7-g5mgk" podUID="f5ce1921-f89d-4e7f-b7b8-93d6a2a284bf" Dec 27 06:02:53 crc kubenswrapper[4760]: E1227 06:02:53.940753 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ps2r8" podUID="4aa3ec72-b438-4ef7-a213-0b6148aed51b" Dec 27 06:02:53 crc kubenswrapper[4760]: E1227 06:02:53.940797 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-zjrv7" podUID="733f56b3-c5b1-4f9c-a35c-bbde9c68ebf1" Dec 27 06:02:53 crc kubenswrapper[4760]: E1227 06:02:53.940920 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8" podUID="4de77998-df1d-4b52-8bbe-3cb9b35356fd" Dec 27 06:02:54 crc kubenswrapper[4760]: I1227 06:02:54.224468 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert\") pod \"infra-operator-controller-manager-6d99759cf-vb95d\" (UID: \"f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" Dec 27 06:02:54 crc kubenswrapper[4760]: E1227 06:02:54.224615 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 27 06:02:54 crc kubenswrapper[4760]: E1227 06:02:54.224692 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert podName:f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8 nodeName:}" failed. No retries permitted until 2025-12-27 06:02:58.224675271 +0000 UTC m=+1100.984744586 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert") pod "infra-operator-controller-manager-6d99759cf-vb95d" (UID: "f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8") : secret "infra-operator-webhook-server-cert" not found Dec 27 06:02:54 crc kubenswrapper[4760]: I1227 06:02:54.730436 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz\" (UID: \"ed40f1e4-9823-4b11-b48e-4f4019a3796c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" Dec 27 06:02:54 crc kubenswrapper[4760]: E1227 06:02:54.730578 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 27 06:02:54 crc kubenswrapper[4760]: E1227 06:02:54.730629 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert podName:ed40f1e4-9823-4b11-b48e-4f4019a3796c nodeName:}" failed. No retries permitted until 2025-12-27 06:02:58.730615982 +0000 UTC m=+1101.490685297 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" (UID: "ed40f1e4-9823-4b11-b48e-4f4019a3796c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 27 06:02:54 crc kubenswrapper[4760]: I1227 06:02:54.933227 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:02:54 crc kubenswrapper[4760]: I1227 06:02:54.933294 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:02:54 crc kubenswrapper[4760]: E1227 06:02:54.933455 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 27 06:02:54 crc kubenswrapper[4760]: E1227 06:02:54.933539 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs podName:b0d5f45a-ddba-4d84-b567-17193cfaef2b nodeName:}" failed. No retries permitted until 2025-12-27 06:02:58.933521334 +0000 UTC m=+1101.693590649 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs") pod "openstack-operator-controller-manager-66d6f5c46d-cznpr" (UID: "b0d5f45a-ddba-4d84-b567-17193cfaef2b") : secret "metrics-server-cert" not found Dec 27 06:02:54 crc kubenswrapper[4760]: E1227 06:02:54.933473 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 27 06:02:54 crc kubenswrapper[4760]: E1227 06:02:54.933646 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs podName:b0d5f45a-ddba-4d84-b567-17193cfaef2b nodeName:}" failed. No retries permitted until 2025-12-27 06:02:58.933609346 +0000 UTC m=+1101.693678661 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs") pod "openstack-operator-controller-manager-66d6f5c46d-cznpr" (UID: "b0d5f45a-ddba-4d84-b567-17193cfaef2b") : secret "webhook-server-cert" not found Dec 27 06:02:58 crc kubenswrapper[4760]: I1227 06:02:58.294189 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert\") pod \"infra-operator-controller-manager-6d99759cf-vb95d\" (UID: \"f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" Dec 27 06:02:58 crc kubenswrapper[4760]: E1227 06:02:58.294415 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 27 06:02:58 crc kubenswrapper[4760]: E1227 06:02:58.294689 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert podName:f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8 nodeName:}" failed. No retries permitted until 2025-12-27 06:03:06.294670617 +0000 UTC m=+1109.054739932 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert") pod "infra-operator-controller-manager-6d99759cf-vb95d" (UID: "f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8") : secret "infra-operator-webhook-server-cert" not found Dec 27 06:02:58 crc kubenswrapper[4760]: I1227 06:02:58.802006 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz\" (UID: \"ed40f1e4-9823-4b11-b48e-4f4019a3796c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" Dec 27 06:02:58 crc kubenswrapper[4760]: E1227 06:02:58.802199 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 27 06:02:58 crc kubenswrapper[4760]: E1227 06:02:58.802346 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert podName:ed40f1e4-9823-4b11-b48e-4f4019a3796c nodeName:}" failed. No retries permitted until 2025-12-27 06:03:06.802320901 +0000 UTC m=+1109.562390216 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" (UID: "ed40f1e4-9823-4b11-b48e-4f4019a3796c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 27 06:02:59 crc kubenswrapper[4760]: I1227 06:02:59.005309 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:02:59 crc kubenswrapper[4760]: I1227 06:02:59.005383 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:02:59 crc kubenswrapper[4760]: E1227 06:02:59.005515 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 27 06:02:59 crc kubenswrapper[4760]: E1227 06:02:59.005556 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 27 06:02:59 crc kubenswrapper[4760]: E1227 06:02:59.005621 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs podName:b0d5f45a-ddba-4d84-b567-17193cfaef2b nodeName:}" failed. No retries permitted until 2025-12-27 06:03:07.005602252 +0000 UTC m=+1109.765671567 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs") pod "openstack-operator-controller-manager-66d6f5c46d-cznpr" (UID: "b0d5f45a-ddba-4d84-b567-17193cfaef2b") : secret "webhook-server-cert" not found Dec 27 06:02:59 crc kubenswrapper[4760]: E1227 06:02:59.005660 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs podName:b0d5f45a-ddba-4d84-b567-17193cfaef2b nodeName:}" failed. No retries permitted until 2025-12-27 06:03:07.005633473 +0000 UTC m=+1109.765702848 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs") pod "openstack-operator-controller-manager-66d6f5c46d-cznpr" (UID: "b0d5f45a-ddba-4d84-b567-17193cfaef2b") : secret "metrics-server-cert" not found Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.022057 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-4fmwm" event={"ID":"11fdabd0-f272-435e-86cf-a3fe7343eb0f","Type":"ContainerStarted","Data":"7645c26d4be6da978c47d69e856c664e4f92b3ecf1e4b21089738bc97ed48fd4"} Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.023262 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-4fmwm" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.024973 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-kss4t" event={"ID":"fe17c9a5-ed8f-4fd6-8b6b-e2e2107092a2","Type":"ContainerStarted","Data":"f2588a6e05699cf3723a42e89309d467e689e78ef59426ba30e21b223594fcab"} Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.025231 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-kss4t" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.026569 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-q874z" event={"ID":"0d86a798-bc69-4423-96ce-dc1b4fd03bc8","Type":"ContainerStarted","Data":"e275b530cd3f7cb9761335bb0755baef0b6039175a85e4199938237660d23641"} Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.026890 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-568985c78-q874z" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.028007 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-688f464774-5hjlr" event={"ID":"177b0c02-1f5b-4315-b854-5465123ebcab","Type":"ContainerStarted","Data":"9f617c7dc6bf32c08e7a29aa503452e1a1a92901430310cc59b536be0f858bd2"} Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.028336 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-688f464774-5hjlr" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.029154 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-674bs" event={"ID":"160a8b25-a031-4204-870f-2385ceaaf80e","Type":"ContainerStarted","Data":"a54a85ef47d5a3bfdb15794c851e4fb2e3a04f312657b11e0455b922307723e4"} Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.029449 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-674bs" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.030322 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-7gh59" event={"ID":"e4d879d7-4ae6-4499-89ed-98f48e4f0541","Type":"ContainerStarted","Data":"bab9ab18a5e82019ee58e789b6c77f43fe725335f845c1f8aeab46e0011c6697"} Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.030647 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-7gh59" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.037997 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-ggw2m" event={"ID":"0dc73c70-a9ca-4cc4-a69a-21eaca2dd2c2","Type":"ContainerStarted","Data":"98ffcbefa530671e04ad6e7205c5ad3a50fceb5672805ffbf7f2a1539886483a"} Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.038628 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-ggw2m" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.039276 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-7fdbb74498-gnjnq" event={"ID":"d8cc7b7c-b6b1-4eab-b0bb-616b227dc790","Type":"ContainerStarted","Data":"6900be9ae134ad908eee41a885504ada8e8eb53354eb298a19ddc975d7c07a79"} Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.039849 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-7fdbb74498-gnjnq" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.040580 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-74f84d69b6-vvjqd" event={"ID":"f7aa22fd-6d4a-484b-9814-3e8e8766a423","Type":"ContainerStarted","Data":"d0c4b37c316d9c379b5c38a32928e7dd9b4a19710d86c45cd53a585012cf5c7c"} Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.040925 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-74f84d69b6-vvjqd" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.041768 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-q4p4p" event={"ID":"d0770d35-0c7d-4fe6-91f3-e42a6ada0c8d","Type":"ContainerStarted","Data":"54e987749a89f0c8efe2bb66fa93e4d7ef560214f702cbcf9f9489003d1d6ac7"} Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.041840 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-q4p4p" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.042849 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-568d76f566-slgbn" event={"ID":"73020260-4c3a-4bd2-8749-58d052d076e3","Type":"ContainerStarted","Data":"07ee4cdaebe22c0a7647cba24b32b3ec82aaf1d243e68df6ebdd8c897b29605e"} Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.043744 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-568d76f566-slgbn" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.044573 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6d59c96c98-ksz65" event={"ID":"0bc672f2-6342-4276-a266-c6bbd7f1896c","Type":"ContainerStarted","Data":"0e775e86b83d3adcdc5a07a406e09a9f38c5a33aebdce005c7130b459e79dfca"} Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.045245 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6d59c96c98-ksz65" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.046214 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/cinder-operator-controller-manager-5f98b4754f-km8pb" event={"ID":"ea3b319d-0327-447a-a3fb-b872f98c5e99","Type":"ContainerStarted","Data":"26cc483cf771200e60404a094a2a2e5e46e9bae31467e25615fc1e69abe9f703"} Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.046498 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5f98b4754f-km8pb" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.048037 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs" event={"ID":"5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96","Type":"ContainerStarted","Data":"ed5966ab3b8a08423f1798d85257e02a9eeb75992ee70c4520674277555e482b"} Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.048345 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.084938 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-4fmwm" podStartSLOduration=2.826332153 podStartE2EDuration="15.084917556s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:51.768705989 +0000 UTC m=+1094.528775304" lastFinishedPulling="2025-12-27 06:03:04.027291362 +0000 UTC m=+1106.787360707" observedRunningTime="2025-12-27 06:03:05.0430239 +0000 UTC m=+1107.803093225" watchObservedRunningTime="2025-12-27 06:03:05.084917556 +0000 UTC m=+1107.844986871" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.096723 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-674bs" podStartSLOduration=2.893983737 podStartE2EDuration="15.096701808s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:51.818790208 +0000 UTC m=+1094.578859523" lastFinishedPulling="2025-12-27 06:03:04.021508239 +0000 UTC m=+1106.781577594" observedRunningTime="2025-12-27 06:03:05.088983947 +0000 UTC m=+1107.849053262" watchObservedRunningTime="2025-12-27 06:03:05.096701808 +0000 UTC m=+1107.856771193" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.210411 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-7gh59" podStartSLOduration=3.225249986 podStartE2EDuration="15.210391812s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:52.041223123 +0000 UTC m=+1094.801292428" lastFinishedPulling="2025-12-27 06:03:04.026364899 +0000 UTC m=+1106.786434254" observedRunningTime="2025-12-27 06:03:05.205448919 +0000 UTC m=+1107.965518234" watchObservedRunningTime="2025-12-27 06:03:05.210391812 +0000 UTC m=+1107.970461127" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.211479 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-568985c78-q874z" podStartSLOduration=3.059854883 podStartE2EDuration="15.211475418s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:51.931673653 +0000 UTC m=+1094.691742968" lastFinishedPulling="2025-12-27 06:03:04.083294188 +0000 UTC m=+1106.843363503" observedRunningTime="2025-12-27 06:03:05.131374596 +0000 UTC 
m=+1107.891443911" watchObservedRunningTime="2025-12-27 06:03:05.211475418 +0000 UTC m=+1107.971544733" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.232983 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-7fdbb74498-gnjnq" podStartSLOduration=3.432196078 podStartE2EDuration="15.232968301s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:52.220698765 +0000 UTC m=+1094.980768090" lastFinishedPulling="2025-12-27 06:03:04.021470958 +0000 UTC m=+1106.781540313" observedRunningTime="2025-12-27 06:03:05.229695239 +0000 UTC m=+1107.989764544" watchObservedRunningTime="2025-12-27 06:03:05.232968301 +0000 UTC m=+1107.993037616" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.270033 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-ggw2m" podStartSLOduration=2.638506095 podStartE2EDuration="15.270018487s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:51.389953506 +0000 UTC m=+1094.150022821" lastFinishedPulling="2025-12-27 06:03:04.021465898 +0000 UTC m=+1106.781535213" observedRunningTime="2025-12-27 06:03:05.267150966 +0000 UTC m=+1108.027220281" watchObservedRunningTime="2025-12-27 06:03:05.270018487 +0000 UTC m=+1108.030087802" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.297940 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-kss4t" podStartSLOduration=3.074194217 podStartE2EDuration="15.297924628s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:51.75581766 +0000 UTC m=+1094.515886975" lastFinishedPulling="2025-12-27 06:03:03.979548061 +0000 UTC m=+1106.739617386" observedRunningTime="2025-12-27 06:03:05.296245677 +0000 UTC m=+1108.056314992" watchObservedRunningTime="2025-12-27 06:03:05.297924628 +0000 UTC m=+1108.057993943" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.329113 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-74f84d69b6-vvjqd" podStartSLOduration=3.430072776 podStartE2EDuration="15.329097879s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:52.081129002 +0000 UTC m=+1094.841198327" lastFinishedPulling="2025-12-27 06:03:03.980154105 +0000 UTC m=+1106.740223430" observedRunningTime="2025-12-27 06:03:05.324700051 +0000 UTC m=+1108.084769356" watchObservedRunningTime="2025-12-27 06:03:05.329097879 +0000 UTC m=+1108.089167194" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.348250 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-688f464774-5hjlr" podStartSLOduration=2.738547231 podStartE2EDuration="15.348232903s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:51.411845538 +0000 UTC m=+1094.171914853" lastFinishedPulling="2025-12-27 06:03:04.02153118 +0000 UTC m=+1106.781600525" observedRunningTime="2025-12-27 06:03:05.343046064 +0000 UTC m=+1108.103115379" watchObservedRunningTime="2025-12-27 06:03:05.348232903 +0000 UTC m=+1108.108302208" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.360396 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-q4p4p" podStartSLOduration=2.933584208 podStartE2EDuration="15.360379744s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:51.552721584 +0000 UTC m=+1094.312790899" lastFinishedPulling="2025-12-27 06:03:03.97951711 +0000 UTC m=+1106.739586435" observedRunningTime="2025-12-27 06:03:05.359245516 +0000 UTC m=+1108.119314831" watchObservedRunningTime="2025-12-27 06:03:05.360379744 +0000 UTC m=+1108.120449059" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.375842 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6d59c96c98-ksz65" podStartSLOduration=3.387112331 podStartE2EDuration="15.375828516s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:52.041720975 +0000 UTC m=+1094.801790300" lastFinishedPulling="2025-12-27 06:03:04.03043717 +0000 UTC m=+1106.790506485" observedRunningTime="2025-12-27 06:03:05.372266578 +0000 UTC m=+1108.132335893" watchObservedRunningTime="2025-12-27 06:03:05.375828516 +0000 UTC m=+1108.135897831" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.399779 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5f98b4754f-km8pb" podStartSLOduration=2.9269871050000003 podStartE2EDuration="15.399761389s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:51.553531554 +0000 UTC m=+1094.313600869" lastFinishedPulling="2025-12-27 06:03:04.026305798 +0000 UTC m=+1106.786375153" observedRunningTime="2025-12-27 06:03:05.396531839 +0000 UTC m=+1108.156601154" watchObservedRunningTime="2025-12-27 06:03:05.399761389 +0000 UTC m=+1108.159830704" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.431174 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-568d76f566-slgbn" podStartSLOduration=3.154016243 podStartE2EDuration="15.431157835s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:51.702422869 +0000 UTC m=+1094.462492184" lastFinishedPulling="2025-12-27 06:03:03.979564461 +0000 UTC m=+1106.739633776" observedRunningTime="2025-12-27 06:03:05.426046049 +0000 UTC m=+1108.186115364" watchObservedRunningTime="2025-12-27 06:03:05.431157835 +0000 UTC m=+1108.191227150" Dec 27 06:03:05 crc kubenswrapper[4760]: I1227 06:03:05.448818 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs" podStartSLOduration=3.286393179 podStartE2EDuration="15.448801472s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:51.954311863 +0000 UTC m=+1094.714381178" lastFinishedPulling="2025-12-27 06:03:04.116720156 +0000 UTC m=+1106.876789471" observedRunningTime="2025-12-27 06:03:05.443311326 +0000 UTC m=+1108.203380641" watchObservedRunningTime="2025-12-27 06:03:05.448801472 +0000 UTC m=+1108.208870787" Dec 27 06:03:06 crc kubenswrapper[4760]: I1227 06:03:06.327913 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert\") pod \"infra-operator-controller-manager-6d99759cf-vb95d\" (UID: \"f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8\") " 
pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" Dec 27 06:03:06 crc kubenswrapper[4760]: E1227 06:03:06.328068 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 27 06:03:06 crc kubenswrapper[4760]: E1227 06:03:06.328129 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert podName:f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8 nodeName:}" failed. No retries permitted until 2025-12-27 06:03:22.328114534 +0000 UTC m=+1125.088183849 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert") pod "infra-operator-controller-manager-6d99759cf-vb95d" (UID: "f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8") : secret "infra-operator-webhook-server-cert" not found Dec 27 06:03:06 crc kubenswrapper[4760]: I1227 06:03:06.836995 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz\" (UID: \"ed40f1e4-9823-4b11-b48e-4f4019a3796c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" Dec 27 06:03:06 crc kubenswrapper[4760]: E1227 06:03:06.837168 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 27 06:03:06 crc kubenswrapper[4760]: E1227 06:03:06.837228 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert podName:ed40f1e4-9823-4b11-b48e-4f4019a3796c nodeName:}" failed. No retries permitted until 2025-12-27 06:03:22.837209784 +0000 UTC m=+1125.597279099 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" (UID: "ed40f1e4-9823-4b11-b48e-4f4019a3796c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 27 06:03:07 crc kubenswrapper[4760]: I1227 06:03:07.039950 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:03:07 crc kubenswrapper[4760]: I1227 06:03:07.040137 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:03:07 crc kubenswrapper[4760]: E1227 06:03:07.040169 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 27 06:03:07 crc kubenswrapper[4760]: E1227 06:03:07.040247 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs podName:b0d5f45a-ddba-4d84-b567-17193cfaef2b nodeName:}" failed. No retries permitted until 2025-12-27 06:03:23.040230508 +0000 UTC m=+1125.800299833 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs") pod "openstack-operator-controller-manager-66d6f5c46d-cznpr" (UID: "b0d5f45a-ddba-4d84-b567-17193cfaef2b") : secret "webhook-server-cert" not found Dec 27 06:03:07 crc kubenswrapper[4760]: E1227 06:03:07.040464 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 27 06:03:07 crc kubenswrapper[4760]: E1227 06:03:07.040715 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs podName:b0d5f45a-ddba-4d84-b567-17193cfaef2b nodeName:}" failed. No retries permitted until 2025-12-27 06:03:23.040633298 +0000 UTC m=+1125.800702663 (durationBeforeRetry 16s). 
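[annotation] The pod_startup_latency_tracker records carry enough fields to rederive both durations they report: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image pull window (lastFinishedPulling minus firstStartedPulling) from that E2E figure. Worked with the heat-operator numbers from the log:

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func ts(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

// Timestamps below are copied from the heat-operator startup record.
func main() {
	created := ts("2025-12-27 06:02:50 +0000 UTC")
	firstPull := ts("2025-12-27 06:02:51.768705989 +0000 UTC")
	lastPull := ts("2025-12-27 06:03:04.027291362 +0000 UTC")
	running := ts("2025-12-27 06:03:05.084917556 +0000 UTC")

	e2e := running.Sub(created)             // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull)    // podStartSLOduration
	fmt.Println("podStartE2EDuration:", e2e) // 15.084917556s
	fmt.Println("podStartSLOduration:", slo) // ~2.826332153s, as logged
}

The kubelet does this arithmetic on monotonic clock readings (the m=+... offsets in each record), so the wall-clock result here differs from the logged value only in the last digits.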
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs") pod "openstack-operator-controller-manager-66d6f5c46d-cznpr" (UID: "b0d5f45a-ddba-4d84-b567-17193cfaef2b") : secret "metrics-server-cert" not found Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.088347 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-57d64f56b7-g5mgk" event={"ID":"f5ce1921-f89d-4e7f-b7b8-93d6a2a284bf","Type":"ContainerStarted","Data":"2cc13db8c293557420ddb0c3ba1ba2fa1827db3dca04c7c2660ba33868ab49d1"} Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.089917 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-57d64f56b7-g5mgk" Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.094116 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fbb89d79f-gwngw" event={"ID":"3250a06b-b268-405c-b891-7657e4818fe8","Type":"ContainerStarted","Data":"d52a5fb2abff2b567a4bf0fd04d667c1678fab291576f4d6e70f076fd127a4df"} Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.094939 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5fbb89d79f-gwngw" Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.097735 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-zjrv7" event={"ID":"733f56b3-c5b1-4f9c-a35c-bbde9c68ebf1","Type":"ContainerStarted","Data":"fa206a9907c1ae3c616cc5f98df5b11c6557bb77279e53779435cd8607d10818"} Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.098334 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-zjrv7" Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.100985 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-g4z7d" event={"ID":"6c7e09e0-9c92-40fa-99e1-b510ab43fb39","Type":"ContainerStarted","Data":"4a0f3544b0a0a951d6f97b2ea70147690b0adfd004ac10fa97b430eba3464c7a"} Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.101424 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-g4z7d" Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.112422 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-57d64f56b7-g5mgk" podStartSLOduration=3.3704941010000002 podStartE2EDuration="20.11240414s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:52.342686024 +0000 UTC m=+1095.102755339" lastFinishedPulling="2025-12-27 06:03:09.084596073 +0000 UTC m=+1111.844665378" observedRunningTime="2025-12-27 06:03:10.105830847 +0000 UTC m=+1112.865900162" watchObservedRunningTime="2025-12-27 06:03:10.11240414 +0000 UTC m=+1112.872473455" Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.124159 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5fbb89d79f-gwngw" podStartSLOduration=2.596896996 podStartE2EDuration="20.12413447s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" 
firstStartedPulling="2025-12-27 06:02:52.233683737 +0000 UTC m=+1094.993753052" lastFinishedPulling="2025-12-27 06:03:09.760921211 +0000 UTC m=+1112.520990526" observedRunningTime="2025-12-27 06:03:10.119137077 +0000 UTC m=+1112.879206402" watchObservedRunningTime="2025-12-27 06:03:10.12413447 +0000 UTC m=+1112.884203795" Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.137143 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-zjrv7" podStartSLOduration=2.627343428 podStartE2EDuration="20.137124871s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:52.234185389 +0000 UTC m=+1094.994254704" lastFinishedPulling="2025-12-27 06:03:09.743966832 +0000 UTC m=+1112.504036147" observedRunningTime="2025-12-27 06:03:10.133066112 +0000 UTC m=+1112.893135427" watchObservedRunningTime="2025-12-27 06:03:10.137124871 +0000 UTC m=+1112.897194186" Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.148943 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-g4z7d" podStartSLOduration=3.753965311 podStartE2EDuration="20.148924234s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:52.221819543 +0000 UTC m=+1094.981888858" lastFinishedPulling="2025-12-27 06:03:08.616778466 +0000 UTC m=+1111.376847781" observedRunningTime="2025-12-27 06:03:10.146304119 +0000 UTC m=+1112.906373434" watchObservedRunningTime="2025-12-27 06:03:10.148924234 +0000 UTC m=+1112.908993559" Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.633928 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5f98b4754f-km8pb" Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.678680 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-ggw2m" Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.723462 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-688f464774-5hjlr" Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.765644 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-q4p4p" Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.909792 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-kss4t" Dec 27 06:03:10 crc kubenswrapper[4760]: I1227 06:03:10.927071 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-568d76f566-slgbn" Dec 27 06:03:11 crc kubenswrapper[4760]: I1227 06:03:11.007363 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-4fmwm" Dec 27 06:03:11 crc kubenswrapper[4760]: I1227 06:03:11.137853 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-674bs" Dec 27 06:03:11 crc kubenswrapper[4760]: I1227 06:03:11.186029 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/keystone-operator-controller-manager-568985c78-q874z" Dec 27 06:03:11 crc kubenswrapper[4760]: I1227 06:03:11.202187 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6d59c96c98-ksz65" Dec 27 06:03:11 crc kubenswrapper[4760]: I1227 06:03:11.244327 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs" Dec 27 06:03:11 crc kubenswrapper[4760]: I1227 06:03:11.332116 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-7gh59" Dec 27 06:03:11 crc kubenswrapper[4760]: I1227 06:03:11.355764 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-7fdbb74498-gnjnq" Dec 27 06:03:11 crc kubenswrapper[4760]: I1227 06:03:11.427482 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-74f84d69b6-vvjqd" Dec 27 06:03:21 crc kubenswrapper[4760]: I1227 06:03:21.266642 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-zjrv7" Dec 27 06:03:21 crc kubenswrapper[4760]: I1227 06:03:21.376790 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-g4z7d" Dec 27 06:03:21 crc kubenswrapper[4760]: I1227 06:03:21.403973 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5fbb89d79f-gwngw" Dec 27 06:03:21 crc kubenswrapper[4760]: I1227 06:03:21.473148 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-57d64f56b7-g5mgk" Dec 27 06:03:22 crc kubenswrapper[4760]: E1227 06:03:22.393324 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 27 06:03:22 crc kubenswrapper[4760]: E1227 06:03:22.393563 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g774g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-ps2r8_openstack-operators(4aa3ec72-b438-4ef7-a213-0b6148aed51b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 27 06:03:22 crc kubenswrapper[4760]: E1227 06:03:22.394856 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ps2r8" podUID="4aa3ec72-b438-4ef7-a213-0b6148aed51b" Dec 27 06:03:22 crc kubenswrapper[4760]: I1227 06:03:22.395514 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert\") pod \"infra-operator-controller-manager-6d99759cf-vb95d\" (UID: \"f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" Dec 27 06:03:22 crc kubenswrapper[4760]: I1227 06:03:22.406213 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8-cert\") pod \"infra-operator-controller-manager-6d99759cf-vb95d\" (UID: \"f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" Dec 27 06:03:22 crc kubenswrapper[4760]: I1227 06:03:22.574026 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" Dec 27 06:03:22 crc kubenswrapper[4760]: I1227 06:03:22.908912 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz\" (UID: \"ed40f1e4-9823-4b11-b48e-4f4019a3796c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" Dec 27 06:03:22 crc kubenswrapper[4760]: I1227 06:03:22.914856 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed40f1e4-9823-4b11-b48e-4f4019a3796c-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz\" (UID: \"ed40f1e4-9823-4b11-b48e-4f4019a3796c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" Dec 27 06:03:22 crc kubenswrapper[4760]: E1227 06:03:22.985192 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 27 06:03:22 crc kubenswrapper[4760]: E1227 06:03:22.985628 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pbtdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7cd87b778f-2rdv8_openstack-operators(4de77998-df1d-4b52-8bbe-3cb9b35356fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 27 06:03:22 crc kubenswrapper[4760]: E1227 06:03:22.987420 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8" podUID="4de77998-df1d-4b52-8bbe-3cb9b35356fd" Dec 27 06:03:23 crc kubenswrapper[4760]: I1227 06:03:23.091343 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" Dec 27 06:03:23 crc kubenswrapper[4760]: I1227 06:03:23.111316 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:03:23 crc kubenswrapper[4760]: I1227 06:03:23.111473 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:03:23 crc kubenswrapper[4760]: I1227 06:03:23.116983 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-metrics-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:03:23 crc kubenswrapper[4760]: I1227 06:03:23.116983 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0d5f45a-ddba-4d84-b567-17193cfaef2b-webhook-certs\") pod \"openstack-operator-controller-manager-66d6f5c46d-cznpr\" (UID: \"b0d5f45a-ddba-4d84-b567-17193cfaef2b\") " pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:03:23 crc kubenswrapper[4760]: I1227 06:03:23.179326 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d"] Dec 27 06:03:23 crc kubenswrapper[4760]: I1227 06:03:23.205156 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" event={"ID":"f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8","Type":"ContainerStarted","Data":"79e94517832cf36c847d76e53c4a891fbe2fd0168f199665feb0d7c08a0c197d"} Dec 27 06:03:23 crc kubenswrapper[4760]: I1227 06:03:23.282526 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:03:23 crc kubenswrapper[4760]: I1227 06:03:23.302364 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz"] Dec 27 06:03:23 crc kubenswrapper[4760]: W1227 06:03:23.305346 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded40f1e4_9823_4b11_b48e_4f4019a3796c.slice/crio-16e95911674d19b5ac2de83a722fc2426b7a3d1c512c159d36a432f6a928bfea WatchSource:0}: Error finding container 16e95911674d19b5ac2de83a722fc2426b7a3d1c512c159d36a432f6a928bfea: Status 404 returned error can't find the container with id 16e95911674d19b5ac2de83a722fc2426b7a3d1c512c159d36a432f6a928bfea Dec 27 06:03:23 crc kubenswrapper[4760]: I1227 06:03:23.479431 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr"] Dec 27 06:03:23 crc kubenswrapper[4760]: W1227 06:03:23.486784 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0d5f45a_ddba_4d84_b567_17193cfaef2b.slice/crio-e820d1f1a79c0072d65593929d9bbb65cb5b452483b8da0509ae940a672f2248 WatchSource:0}: Error finding container e820d1f1a79c0072d65593929d9bbb65cb5b452483b8da0509ae940a672f2248: Status 404 returned error can't find the container with id e820d1f1a79c0072d65593929d9bbb65cb5b452483b8da0509ae940a672f2248 Dec 27 06:03:24 crc kubenswrapper[4760]: I1227 06:03:24.213975 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" event={"ID":"b0d5f45a-ddba-4d84-b567-17193cfaef2b","Type":"ContainerStarted","Data":"5164c88419cac7c6f6a32a23e8f0475af11fc91d9abd813c557edca302497a0d"} Dec 27 06:03:24 crc kubenswrapper[4760]: I1227 06:03:24.214438 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:03:24 crc kubenswrapper[4760]: I1227 06:03:24.214452 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" event={"ID":"b0d5f45a-ddba-4d84-b567-17193cfaef2b","Type":"ContainerStarted","Data":"e820d1f1a79c0072d65593929d9bbb65cb5b452483b8da0509ae940a672f2248"} Dec 27 06:03:24 crc kubenswrapper[4760]: I1227 06:03:24.215582 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" event={"ID":"ed40f1e4-9823-4b11-b48e-4f4019a3796c","Type":"ContainerStarted","Data":"16e95911674d19b5ac2de83a722fc2426b7a3d1c512c159d36a432f6a928bfea"} Dec 27 06:03:24 crc kubenswrapper[4760]: I1227 06:03:24.258684 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" podStartSLOduration=33.258657586 podStartE2EDuration="33.258657586s" podCreationTimestamp="2025-12-27 06:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 06:03:24.250177576 +0000 UTC m=+1127.010246891" watchObservedRunningTime="2025-12-27 06:03:24.258657586 +0000 UTC m=+1127.018726981" Dec 27 06:03:27 crc kubenswrapper[4760]: I1227 06:03:27.239293 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" event={"ID":"f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8","Type":"ContainerStarted","Data":"96b39c46c00a6659b536832329a4ff4c672023fdee04bbbd41bf557ad3c828b6"} Dec 27 06:03:27 crc kubenswrapper[4760]: I1227 06:03:27.239738 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" Dec 27 06:03:27 crc kubenswrapper[4760]: I1227 06:03:27.244493 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" event={"ID":"ed40f1e4-9823-4b11-b48e-4f4019a3796c","Type":"ContainerStarted","Data":"828bdbe127cd05b4fb508c5568c90ae84a8bfe9bbec2eaa5d748f95699af256c"} Dec 27 06:03:27 crc kubenswrapper[4760]: I1227 06:03:27.244676 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" Dec 27 06:03:27 crc kubenswrapper[4760]: I1227 06:03:27.266658 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" podStartSLOduration=34.384294047 podStartE2EDuration="37.26663069s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:03:23.192712666 +0000 UTC m=+1125.952781981" lastFinishedPulling="2025-12-27 06:03:26.075049309 +0000 UTC m=+1128.835118624" observedRunningTime="2025-12-27 06:03:27.2622016 +0000 UTC m=+1130.022270975" watchObservedRunningTime="2025-12-27 06:03:27.26663069 +0000 UTC m=+1130.026700025" Dec 27 06:03:27 crc kubenswrapper[4760]: I1227 06:03:27.322244 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" podStartSLOduration=34.530823551 podStartE2EDuration="37.322221965s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:03:23.307972858 +0000 UTC m=+1126.068042193" lastFinishedPulling="2025-12-27 06:03:26.099371292 +0000 UTC m=+1128.859440607" observedRunningTime="2025-12-27 06:03:27.313814977 +0000 UTC m=+1130.073884302" watchObservedRunningTime="2025-12-27 06:03:27.322221965 +0000 UTC m=+1130.082291290" Dec 27 06:03:32 crc kubenswrapper[4760]: I1227 06:03:32.583826 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-vb95d" Dec 27 06:03:33 crc kubenswrapper[4760]: I1227 06:03:33.098643 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz" Dec 27 06:03:33 crc kubenswrapper[4760]: I1227 06:03:33.292992 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-66d6f5c46d-cznpr" Dec 27 06:03:33 crc kubenswrapper[4760]: E1227 06:03:33.505001 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ps2r8" podUID="4aa3ec72-b438-4ef7-a213-0b6148aed51b" Dec 27 06:03:36 crc kubenswrapper[4760]: E1227 06:03:36.504675 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8" podUID="4de77998-df1d-4b52-8bbe-3cb9b35356fd" Dec 27 06:03:47 crc kubenswrapper[4760]: I1227 06:03:47.412269 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ps2r8" event={"ID":"4aa3ec72-b438-4ef7-a213-0b6148aed51b","Type":"ContainerStarted","Data":"2ff2e849e87d0876120ea49b8a073672489173daa10e39ae9226a315e55da976"} Dec 27 06:03:47 crc kubenswrapper[4760]: I1227 06:03:47.432162 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ps2r8" podStartSLOduration=2.242503874 podStartE2EDuration="56.432144807s" podCreationTimestamp="2025-12-27 06:02:51 +0000 UTC" firstStartedPulling="2025-12-27 06:02:52.368691778 +0000 UTC m=+1095.128761093" lastFinishedPulling="2025-12-27 06:03:46.558332671 +0000 UTC m=+1149.318402026" observedRunningTime="2025-12-27 06:03:47.427191684 +0000 UTC m=+1150.187260999" watchObservedRunningTime="2025-12-27 06:03:47.432144807 +0000 UTC m=+1150.192214122" Dec 27 06:03:49 crc kubenswrapper[4760]: I1227 06:03:49.428610 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8" event={"ID":"4de77998-df1d-4b52-8bbe-3cb9b35356fd","Type":"ContainerStarted","Data":"4f5318db73b617a93ae3e0fae3e68b3a28cd6740b0f8541409ff3515688381a2"} Dec 27 06:03:49 crc kubenswrapper[4760]: I1227 06:03:49.430309 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8" Dec 27 06:04:01 crc kubenswrapper[4760]: I1227 06:04:01.235758 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8" Dec 27 06:04:01 crc kubenswrapper[4760]: I1227 06:04:01.264257 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-2rdv8" podStartSLOduration=14.348393949 podStartE2EDuration="1m11.264237262s" podCreationTimestamp="2025-12-27 06:02:50 +0000 UTC" firstStartedPulling="2025-12-27 06:02:52.081604643 +0000 UTC m=+1094.841673958" lastFinishedPulling="2025-12-27 06:03:48.997447956 +0000 UTC m=+1151.757517271" observedRunningTime="2025-12-27 06:03:49.452502318 +0000 UTC m=+1152.212571643" watchObservedRunningTime="2025-12-27 06:04:01.264237262 +0000 UTC m=+1164.024306587" Dec 27 06:04:05 crc kubenswrapper[4760]: I1227 06:04:05.288387 4760 
patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 06:04:05 crc kubenswrapper[4760]: I1227 06:04:05.289326 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.315025 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.317270 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.322676 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-plugins-conf" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.329490 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.331869 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openshift-service-ca.crt" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.332099 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"kube-root-ca.crt" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.332273 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-default-user" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.332389 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-server-dockercfg-jtjdd" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.332521 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-server-conf" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.357542 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-erlang-cookie" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.457256 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/88c1643d-b9a1-49ac-aff2-39bff3918b3e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.457308 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-68fcef83-ed04-46e3-8b7d-919f02557fcf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68fcef83-ed04-46e3-8b7d-919f02557fcf\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.457333 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/88c1643d-b9a1-49ac-aff2-39bff3918b3e-rabbitmq-confd\") pod \"rabbitmq-server-0\" 
(UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.457355 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/88c1643d-b9a1-49ac-aff2-39bff3918b3e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.457389 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4tr2\" (UniqueName: \"kubernetes.io/projected/88c1643d-b9a1-49ac-aff2-39bff3918b3e-kube-api-access-b4tr2\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.457429 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/88c1643d-b9a1-49ac-aff2-39bff3918b3e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.457458 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/88c1643d-b9a1-49ac-aff2-39bff3918b3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.457477 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/88c1643d-b9a1-49ac-aff2-39bff3918b3e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.457492 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/88c1643d-b9a1-49ac-aff2-39bff3918b3e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.558856 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/88c1643d-b9a1-49ac-aff2-39bff3918b3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.558914 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/88c1643d-b9a1-49ac-aff2-39bff3918b3e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.558938 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/88c1643d-b9a1-49ac-aff2-39bff3918b3e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.558983 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/88c1643d-b9a1-49ac-aff2-39bff3918b3e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.559032 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-68fcef83-ed04-46e3-8b7d-919f02557fcf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68fcef83-ed04-46e3-8b7d-919f02557fcf\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.559403 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/88c1643d-b9a1-49ac-aff2-39bff3918b3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.559497 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/88c1643d-b9a1-49ac-aff2-39bff3918b3e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.559554 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/88c1643d-b9a1-49ac-aff2-39bff3918b3e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.559599 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/88c1643d-b9a1-49ac-aff2-39bff3918b3e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.559644 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4tr2\" (UniqueName: \"kubernetes.io/projected/88c1643d-b9a1-49ac-aff2-39bff3918b3e-kube-api-access-b4tr2\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.559820 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/88c1643d-b9a1-49ac-aff2-39bff3918b3e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.560355 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/88c1643d-b9a1-49ac-aff2-39bff3918b3e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc 
kubenswrapper[4760]: I1227 06:04:10.560895 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/88c1643d-b9a1-49ac-aff2-39bff3918b3e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.565425 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/88c1643d-b9a1-49ac-aff2-39bff3918b3e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.565650 4760 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.565679 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-68fcef83-ed04-46e3-8b7d-919f02557fcf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68fcef83-ed04-46e3-8b7d-919f02557fcf\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94784501e376790c22adcf2e36a94679103db37136f6e2a671faf0f4cae5afde/globalmount\"" pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.566616 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/88c1643d-b9a1-49ac-aff2-39bff3918b3e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.574743 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/88c1643d-b9a1-49ac-aff2-39bff3918b3e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.578481 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4tr2\" (UniqueName: \"kubernetes.io/projected/88c1643d-b9a1-49ac-aff2-39bff3918b3e-kube-api-access-b4tr2\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.616202 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-68fcef83-ed04-46e3-8b7d-919f02557fcf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68fcef83-ed04-46e3-8b7d-919f02557fcf\") pod \"rabbitmq-server-0\" (UID: \"88c1643d-b9a1-49ac-aff2-39bff3918b3e\") " pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.635629 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.637002 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.639256 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-plugins-conf" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.639304 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-erlang-cookie" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.639462 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-conf" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.640822 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-default-user" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.641787 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-dockercfg-fkxqm" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.643499 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.673361 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.765932 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c61bc46-c657-4109-ae26-5f2d02fcce40-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.765993 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c61bc46-c657-4109-ae26-5f2d02fcce40-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.766032 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98pwh\" (UniqueName: \"kubernetes.io/projected/9c61bc46-c657-4109-ae26-5f2d02fcce40-kube-api-access-98pwh\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.766074 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c61bc46-c657-4109-ae26-5f2d02fcce40-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.766119 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c61bc46-c657-4109-ae26-5f2d02fcce40-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.766144 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c61bc46-c657-4109-ae26-5f2d02fcce40-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.766177 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c61bc46-c657-4109-ae26-5f2d02fcce40-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.766218 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c61bc46-c657-4109-ae26-5f2d02fcce40-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.766245 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f6d49d3d-db8c-4dcf-8536-ea4615a654c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6d49d3d-db8c-4dcf-8536-ea4615a654c8\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.868742 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98pwh\" (UniqueName: \"kubernetes.io/projected/9c61bc46-c657-4109-ae26-5f2d02fcce40-kube-api-access-98pwh\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.869068 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c61bc46-c657-4109-ae26-5f2d02fcce40-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.869105 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c61bc46-c657-4109-ae26-5f2d02fcce40-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.869120 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c61bc46-c657-4109-ae26-5f2d02fcce40-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.869143 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/9c61bc46-c657-4109-ae26-5f2d02fcce40-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.869174 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c61bc46-c657-4109-ae26-5f2d02fcce40-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.869201 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f6d49d3d-db8c-4dcf-8536-ea4615a654c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6d49d3d-db8c-4dcf-8536-ea4615a654c8\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.869242 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c61bc46-c657-4109-ae26-5f2d02fcce40-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.869274 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c61bc46-c657-4109-ae26-5f2d02fcce40-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.870711 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c61bc46-c657-4109-ae26-5f2d02fcce40-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.871567 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c61bc46-c657-4109-ae26-5f2d02fcce40-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.873313 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c61bc46-c657-4109-ae26-5f2d02fcce40-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.876434 4760 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.876472 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f6d49d3d-db8c-4dcf-8536-ea4615a654c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6d49d3d-db8c-4dcf-8536-ea4615a654c8\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7f304cc4b3160bae97d0464eb801f6230b4ab2323035964fa2357e1329439acb/globalmount\"" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.877077 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c61bc46-c657-4109-ae26-5f2d02fcce40-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.879710 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c61bc46-c657-4109-ae26-5f2d02fcce40-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.880026 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c61bc46-c657-4109-ae26-5f2d02fcce40-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.891356 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98pwh\" (UniqueName: \"kubernetes.io/projected/9c61bc46-c657-4109-ae26-5f2d02fcce40-kube-api-access-98pwh\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.914022 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.915071 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.919191 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-server-conf" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.919504 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-default-user" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.919670 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-plugins-conf" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.919979 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-erlang-cookie" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.920135 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-server-dockercfg-5dmlq" Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.928444 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Dec 27 06:04:10 crc kubenswrapper[4760]: I1227 06:04:10.931913 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f6d49d3d-db8c-4dcf-8536-ea4615a654c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6d49d3d-db8c-4dcf-8536-ea4615a654c8\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.073585 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-85610243-e460-4c8c-9b48-49e555d574c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85610243-e460-4c8c-9b48-49e555d574c0\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.073685 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/51ff80ae-27ca-4914-8c80-008f6d2d0860-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.073737 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/51ff80ae-27ca-4914-8c80-008f6d2d0860-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.073787 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/51ff80ae-27ca-4914-8c80-008f6d2d0860-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.073820 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/51ff80ae-27ca-4914-8c80-008f6d2d0860-rabbitmq-confd\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.073845 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/51ff80ae-27ca-4914-8c80-008f6d2d0860-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.073873 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/51ff80ae-27ca-4914-8c80-008f6d2d0860-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.073907 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/51ff80ae-27ca-4914-8c80-008f6d2d0860-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.073935 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmbq7\" (UniqueName: \"kubernetes.io/projected/51ff80ae-27ca-4914-8c80-008f6d2d0860-kube-api-access-tmbq7\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.175694 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/51ff80ae-27ca-4914-8c80-008f6d2d0860-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.175811 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/51ff80ae-27ca-4914-8c80-008f6d2d0860-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.175908 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/51ff80ae-27ca-4914-8c80-008f6d2d0860-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.175982 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/51ff80ae-27ca-4914-8c80-008f6d2d0860-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.176028 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/51ff80ae-27ca-4914-8c80-008f6d2d0860-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.176084 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/51ff80ae-27ca-4914-8c80-008f6d2d0860-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.176178 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/51ff80ae-27ca-4914-8c80-008f6d2d0860-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.176226 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmbq7\" (UniqueName: \"kubernetes.io/projected/51ff80ae-27ca-4914-8c80-008f6d2d0860-kube-api-access-tmbq7\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.176321 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-85610243-e460-4c8c-9b48-49e555d574c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85610243-e460-4c8c-9b48-49e555d574c0\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.177632 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/51ff80ae-27ca-4914-8c80-008f6d2d0860-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.178502 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/51ff80ae-27ca-4914-8c80-008f6d2d0860-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.178568 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/51ff80ae-27ca-4914-8c80-008f6d2d0860-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.179201 4760 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.179227 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-85610243-e460-4c8c-9b48-49e555d574c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85610243-e460-4c8c-9b48-49e555d574c0\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/19536fe1f9208997a7631ad7a57eaa4a13dced43b7a965c3e1e5dc0403a558c1/globalmount\"" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.179635 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/51ff80ae-27ca-4914-8c80-008f6d2d0860-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.180827 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/51ff80ae-27ca-4914-8c80-008f6d2d0860-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.180898 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/51ff80ae-27ca-4914-8c80-008f6d2d0860-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.182253 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/51ff80ae-27ca-4914-8c80-008f6d2d0860-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.198653 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmbq7\" (UniqueName: \"kubernetes.io/projected/51ff80ae-27ca-4914-8c80-008f6d2d0860-kube-api-access-tmbq7\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.218013 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-85610243-e460-4c8c-9b48-49e555d574c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85610243-e460-4c8c-9b48-49e555d574c0\") pod \"rabbitmq-cell1-server-0\" (UID: \"51ff80ae-27ca-4914-8c80-008f6d2d0860\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.243905 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.660697 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.661966 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.666127 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-svc" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.666402 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-scripts" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.674692 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"combined-ca-bundle" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.679147 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config-data" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.679543 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-dockercfg-c9nwh" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.791571 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8z9p\" (UniqueName: \"kubernetes.io/projected/ed93a46f-1df6-4144-8487-08764749423a-kube-api-access-j8z9p\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.791693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ed93a46f-1df6-4144-8487-08764749423a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.791758 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ed93a46f-1df6-4144-8487-08764749423a-config-data-default\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.791837 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed93a46f-1df6-4144-8487-08764749423a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.791871 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ed93a46f-1df6-4144-8487-08764749423a-kolla-config\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.791889 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed93a46f-1df6-4144-8487-08764749423a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.791966 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed93a46f-1df6-4144-8487-08764749423a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.792025 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a01a56f6-2643-47f4-ba9e-d10c5c13482b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a01a56f6-2643-47f4-ba9e-d10c5c13482b\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.893924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ed93a46f-1df6-4144-8487-08764749423a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.894000 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ed93a46f-1df6-4144-8487-08764749423a-config-data-default\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.894072 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed93a46f-1df6-4144-8487-08764749423a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.894150 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ed93a46f-1df6-4144-8487-08764749423a-kolla-config\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.894193 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed93a46f-1df6-4144-8487-08764749423a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.894260 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed93a46f-1df6-4144-8487-08764749423a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.894310 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a01a56f6-2643-47f4-ba9e-d10c5c13482b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a01a56f6-2643-47f4-ba9e-d10c5c13482b\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.894358 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j8z9p\" (UniqueName: \"kubernetes.io/projected/ed93a46f-1df6-4144-8487-08764749423a-kube-api-access-j8z9p\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.895787 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ed93a46f-1df6-4144-8487-08764749423a-config-data-default\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.895845 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ed93a46f-1df6-4144-8487-08764749423a-kolla-config\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.898165 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed93a46f-1df6-4144-8487-08764749423a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.898869 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ed93a46f-1df6-4144-8487-08764749423a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.898894 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed93a46f-1df6-4144-8487-08764749423a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.899176 4760 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.899208 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a01a56f6-2643-47f4-ba9e-d10c5c13482b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a01a56f6-2643-47f4-ba9e-d10c5c13482b\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9ded00d053108df2a5a036f2dd252f673ea179e50ab38f96b51656f847dc3104/globalmount\"" pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.901060 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed93a46f-1df6-4144-8487-08764749423a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.920948 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8z9p\" (UniqueName: \"kubernetes.io/projected/ed93a46f-1df6-4144-8487-08764749423a-kube-api-access-j8z9p\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.940060 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a01a56f6-2643-47f4-ba9e-d10c5c13482b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a01a56f6-2643-47f4-ba9e-d10c5c13482b\") pod \"openstack-galera-0\" (UID: \"ed93a46f-1df6-4144-8487-08764749423a\") " pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:11.985362 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:12.083369 4760 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-f9q5z container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:12.083467 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z" podUID="67b40739-4e24-41b5-9d6a-7ab19939c81c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:12.125483 4760 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-f9q5z container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:12.126647 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f9q5z" podUID="67b40739-4e24-41b5-9d6a-7ab19939c81c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:12.428283 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c61bc46-c657-4109-ae26-5f2d02fcce40-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"9c61bc46-c657-4109-ae26-5f2d02fcce40\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:12.432162 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-74f84d69b6-vvjqd" podUID="f7aa22fd-6d4a-484b-9814-3e8e8766a423" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:12.432055 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-74f84d69b6-vvjqd" podUID="f7aa22fd-6d4a-484b-9814-3e8e8766a423" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:12.461446 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:12.535142 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:12.578682 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:12.644182 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"88c1643d-b9a1-49ac-aff2-39bff3918b3e","Type":"ContainerStarted","Data":"4b302664780bcdbcc1dba1abbf5207f3afb66e4a97ebb3e7e5d92cda945f7457"} Dec 27 06:04:12 crc kubenswrapper[4760]: I1227 06:04:12.942909 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.033214 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Dec 27 06:04:13 crc kubenswrapper[4760]: W1227 06:04:13.044919 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded93a46f_1df6_4144_8487_08764749423a.slice/crio-7c2f4645c6a899a67b070f380483e3ed5891e151a30f9b89aeab7c44a23f4f87 WatchSource:0}: Error finding container 7c2f4645c6a899a67b070f380483e3ed5891e151a30f9b89aeab7c44a23f4f87: Status 404 returned error can't find the container with id 7c2f4645c6a899a67b070f380483e3ed5891e151a30f9b89aeab7c44a23f4f87 Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.048329 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.081905 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/memcached-0"] Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.086103 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/memcached-0" Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.089934 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"memcached-memcached-dockercfg-dwwqp" Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.089943 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"memcached-config-data" Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.100054 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.220347 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fc09772-cca9-4ff6-88ae-b66171f0745f-config-data\") pod \"memcached-0\" (UID: \"6fc09772-cca9-4ff6-88ae-b66171f0745f\") " pod="nova-kuttl-default/memcached-0" Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.220396 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fc09772-cca9-4ff6-88ae-b66171f0745f-kolla-config\") pod \"memcached-0\" (UID: \"6fc09772-cca9-4ff6-88ae-b66171f0745f\") " pod="nova-kuttl-default/memcached-0" Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.220420 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmw9r\" (UniqueName: \"kubernetes.io/projected/6fc09772-cca9-4ff6-88ae-b66171f0745f-kube-api-access-hmw9r\") pod \"memcached-0\" (UID: \"6fc09772-cca9-4ff6-88ae-b66171f0745f\") " pod="nova-kuttl-default/memcached-0" Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.321886 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fc09772-cca9-4ff6-88ae-b66171f0745f-config-data\") pod \"memcached-0\" (UID: \"6fc09772-cca9-4ff6-88ae-b66171f0745f\") " pod="nova-kuttl-default/memcached-0" Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.321942 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fc09772-cca9-4ff6-88ae-b66171f0745f-kolla-config\") pod \"memcached-0\" (UID: \"6fc09772-cca9-4ff6-88ae-b66171f0745f\") " pod="nova-kuttl-default/memcached-0" Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.321964 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmw9r\" (UniqueName: \"kubernetes.io/projected/6fc09772-cca9-4ff6-88ae-b66171f0745f-kube-api-access-hmw9r\") pod \"memcached-0\" (UID: \"6fc09772-cca9-4ff6-88ae-b66171f0745f\") " pod="nova-kuttl-default/memcached-0" Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.323222 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fc09772-cca9-4ff6-88ae-b66171f0745f-config-data\") pod \"memcached-0\" (UID: \"6fc09772-cca9-4ff6-88ae-b66171f0745f\") " pod="nova-kuttl-default/memcached-0" Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.323450 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fc09772-cca9-4ff6-88ae-b66171f0745f-kolla-config\") pod \"memcached-0\" (UID: \"6fc09772-cca9-4ff6-88ae-b66171f0745f\") " pod="nova-kuttl-default/memcached-0" Dec 27 06:04:13 
crc kubenswrapper[4760]: I1227 06:04:13.342875 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmw9r\" (UniqueName: \"kubernetes.io/projected/6fc09772-cca9-4ff6-88ae-b66171f0745f-kube-api-access-hmw9r\") pod \"memcached-0\" (UID: \"6fc09772-cca9-4ff6-88ae-b66171f0745f\") " pod="nova-kuttl-default/memcached-0" Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.414965 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/memcached-0" Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.651012 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"ed93a46f-1df6-4144-8487-08764749423a","Type":"ContainerStarted","Data":"7c2f4645c6a899a67b070f380483e3ed5891e151a30f9b89aeab7c44a23f4f87"} Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.652263 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"9c61bc46-c657-4109-ae26-5f2d02fcce40","Type":"ContainerStarted","Data":"d86b300895568b231e803255a6d345c47a8f6d94592b632aefca108922b9d84e"} Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.653019 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"51ff80ae-27ca-4914-8c80-008f6d2d0860","Type":"ContainerStarted","Data":"96e95f7546b26a732a49a5e63566843ba0103cf97b5797685fb1a4702ee1cff6"} Dec 27 06:04:13 crc kubenswrapper[4760]: I1227 06:04:13.903823 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.185968 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.187524 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.189430 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-config-data" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.189670 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-cell1-dockercfg-tx89c" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.189732 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-cell1-svc" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.193430 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-scripts" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.199708 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.335908 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4616666b-d673-405c-ac8c-103aead8fd65\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4616666b-d673-405c-ac8c-103aead8fd65\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.336012 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5afbca79-d46f-48e9-82f8-2f676c4c7960-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.336060 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5afbca79-d46f-48e9-82f8-2f676c4c7960-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.336185 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5afbca79-d46f-48e9-82f8-2f676c4c7960-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.336472 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5afbca79-d46f-48e9-82f8-2f676c4c7960-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.336554 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5afbca79-d46f-48e9-82f8-2f676c4c7960-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc 
kubenswrapper[4760]: I1227 06:04:14.336605 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng8l2\" (UniqueName: \"kubernetes.io/projected/5afbca79-d46f-48e9-82f8-2f676c4c7960-kube-api-access-ng8l2\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.336629 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5afbca79-d46f-48e9-82f8-2f676c4c7960-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.439012 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5afbca79-d46f-48e9-82f8-2f676c4c7960-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.439194 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5afbca79-d46f-48e9-82f8-2f676c4c7960-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.439244 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5afbca79-d46f-48e9-82f8-2f676c4c7960-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.439277 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5afbca79-d46f-48e9-82f8-2f676c4c7960-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.439301 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng8l2\" (UniqueName: \"kubernetes.io/projected/5afbca79-d46f-48e9-82f8-2f676c4c7960-kube-api-access-ng8l2\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.439350 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4616666b-d673-405c-ac8c-103aead8fd65\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4616666b-d673-405c-ac8c-103aead8fd65\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.439387 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5afbca79-d46f-48e9-82f8-2f676c4c7960-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: 
\"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.439413 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5afbca79-d46f-48e9-82f8-2f676c4c7960-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.440425 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5afbca79-d46f-48e9-82f8-2f676c4c7960-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.440530 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5afbca79-d46f-48e9-82f8-2f676c4c7960-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.442857 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5afbca79-d46f-48e9-82f8-2f676c4c7960-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.444939 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5afbca79-d46f-48e9-82f8-2f676c4c7960-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.445393 4760 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.445457 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4616666b-d673-405c-ac8c-103aead8fd65\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4616666b-d673-405c-ac8c-103aead8fd65\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0fc3d9c0dd5bc0995a2c0ac95be45bbd19eec60d3840d073ed5cbe6eefefacd3/globalmount\"" pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.451009 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5afbca79-d46f-48e9-82f8-2f676c4c7960-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.452407 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5afbca79-d46f-48e9-82f8-2f676c4c7960-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.473242 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng8l2\" (UniqueName: \"kubernetes.io/projected/5afbca79-d46f-48e9-82f8-2f676c4c7960-kube-api-access-ng8l2\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.497063 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4616666b-d673-405c-ac8c-103aead8fd65\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4616666b-d673-405c-ac8c-103aead8fd65\") pod \"openstack-cell1-galera-0\" (UID: \"5afbca79-d46f-48e9-82f8-2f676c4c7960\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.506009 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:14 crc kubenswrapper[4760]: I1227 06:04:14.663822 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"6fc09772-cca9-4ff6-88ae-b66171f0745f","Type":"ContainerStarted","Data":"1e8b658f0a5ae392adfcaec6bce6a0fb69979e8643559045fc031d2ff68cee0c"} Dec 27 06:04:15 crc kubenswrapper[4760]: I1227 06:04:15.020743 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Dec 27 06:04:15 crc kubenswrapper[4760]: W1227 06:04:15.029984 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5afbca79_d46f_48e9_82f8_2f676c4c7960.slice/crio-3563c27adc2d852b9a05015985e8b1b33de44661862ee0a42c1d45a8b574b014 WatchSource:0}: Error finding container 3563c27adc2d852b9a05015985e8b1b33de44661862ee0a42c1d45a8b574b014: Status 404 returned error can't find the container with id 3563c27adc2d852b9a05015985e8b1b33de44661862ee0a42c1d45a8b574b014 Dec 27 06:04:15 crc kubenswrapper[4760]: I1227 06:04:15.679318 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"5afbca79-d46f-48e9-82f8-2f676c4c7960","Type":"ContainerStarted","Data":"3563c27adc2d852b9a05015985e8b1b33de44661862ee0a42c1d45a8b574b014"} Dec 27 06:04:30 crc kubenswrapper[4760]: E1227 06:04:30.774236 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Dec 27 06:04:30 crc kubenswrapper[4760]: E1227 06:04:30.774983 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5c7h58dh97h59h66hb4h57bh55dh59h688h64dh598h589h6h59ch54dh55ch64bhb7h68ch5dch9dh67h5d8h5b5h685h5bch669h648hd4h54h546q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmw9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_nova-kuttl-default(6fc09772-cca9-4ff6-88ae-b66171f0745f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 27 06:04:30 crc kubenswrapper[4760]: E1227 06:04:30.776282 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/memcached-0" podUID="6fc09772-cca9-4ff6-88ae-b66171f0745f" Dec 27 06:04:30 crc kubenswrapper[4760]: E1227 06:04:30.813667 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="nova-kuttl-default/memcached-0" podUID="6fc09772-cca9-4ff6-88ae-b66171f0745f" Dec 27 06:04:31 crc kubenswrapper[4760]: E1227 06:04:31.787381 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 27 06:04:31 crc kubenswrapper[4760]: E1227 06:04:31.787624 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmbq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_nova-kuttl-default(51ff80ae-27ca-4914-8c80-008f6d2d0860): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 27 06:04:31 crc kubenswrapper[4760]: E1227 06:04:31.788797 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/rabbitmq-cell1-server-0" podUID="51ff80ae-27ca-4914-8c80-008f6d2d0860" Dec 27 06:04:31 crc kubenswrapper[4760]: E1227 06:04:31.799232 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 27 06:04:31 crc kubenswrapper[4760]: E1227 06:04:31.799473 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: 
{{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4tr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_nova-kuttl-default(88c1643d-b9a1-49ac-aff2-39bff3918b3e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 27 06:04:31 crc kubenswrapper[4760]: E1227 06:04:31.800682 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/rabbitmq-server-0" podUID="88c1643d-b9a1-49ac-aff2-39bff3918b3e" Dec 27 06:04:31 crc kubenswrapper[4760]: E1227 06:04:31.806619 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 27 06:04:31 crc kubenswrapper[4760]: E1227 06:04:31.806763 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98pwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-broadcaster-server-0_nova-kuttl-default(9c61bc46-c657-4109-ae26-5f2d02fcce40): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 27 06:04:31 crc kubenswrapper[4760]: E1227 06:04:31.807966 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" podUID="9c61bc46-c657-4109-ae26-5f2d02fcce40" Dec 27 06:04:31 crc kubenswrapper[4760]: E1227 06:04:31.819221 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" podUID="9c61bc46-c657-4109-ae26-5f2d02fcce40" Dec 27 06:04:31 crc kubenswrapper[4760]: E1227 06:04:31.819291 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="nova-kuttl-default/rabbitmq-server-0" podUID="88c1643d-b9a1-49ac-aff2-39bff3918b3e" Dec 27 06:04:31 crc kubenswrapper[4760]: E1227 06:04:31.819698 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="nova-kuttl-default/rabbitmq-cell1-server-0" 
podUID="51ff80ae-27ca-4914-8c80-008f6d2d0860" Dec 27 06:04:33 crc kubenswrapper[4760]: E1227 06:04:33.736755 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 27 06:04:33 crc kubenswrapper[4760]: E1227 06:04:33.737654 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ng8l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_nova-kuttl-default(5afbca79-d46f-48e9-82f8-2f676c4c7960): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 27 06:04:33 crc kubenswrapper[4760]: E1227 06:04:33.738975 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/openstack-cell1-galera-0" podUID="5afbca79-d46f-48e9-82f8-2f676c4c7960" Dec 27 06:04:33 crc kubenswrapper[4760]: E1227 06:04:33.761709 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 27 06:04:33 crc kubenswrapper[4760]: E1227 06:04:33.761897 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j8z9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_nova-kuttl-default(ed93a46f-1df6-4144-8487-08764749423a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 27 06:04:33 crc kubenswrapper[4760]: E1227 06:04:33.763051 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/openstack-galera-0" podUID="ed93a46f-1df6-4144-8487-08764749423a" Dec 27 06:04:33 crc kubenswrapper[4760]: E1227 06:04:33.839387 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="nova-kuttl-default/openstack-cell1-galera-0" podUID="5afbca79-d46f-48e9-82f8-2f676c4c7960" Dec 27 06:04:33 crc kubenswrapper[4760]: E1227 06:04:33.839960 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="nova-kuttl-default/openstack-galera-0" podUID="ed93a46f-1df6-4144-8487-08764749423a" Dec 27 06:04:35 crc kubenswrapper[4760]: I1227 06:04:35.288385 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 06:04:35 crc kubenswrapper[4760]: I1227 06:04:35.288967 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 06:04:42 crc kubenswrapper[4760]: I1227 06:04:42.920174 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"6fc09772-cca9-4ff6-88ae-b66171f0745f","Type":"ContainerStarted","Data":"327f4a7500456073ecc28a056d4959a3f719b8d200622e6b8e972f7283616fd7"} Dec 27 06:04:42 crc kubenswrapper[4760]: I1227 06:04:42.921545 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/memcached-0" Dec 27 06:04:42 crc kubenswrapper[4760]: I1227 06:04:42.940562 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/memcached-0" podStartSLOduration=1.935962159 podStartE2EDuration="29.94054341s" podCreationTimestamp="2025-12-27 06:04:13 +0000 UTC" firstStartedPulling="2025-12-27 06:04:13.907217648 +0000 UTC m=+1176.667287003" lastFinishedPulling="2025-12-27 06:04:41.911798929 +0000 UTC m=+1204.671868254" observedRunningTime="2025-12-27 06:04:42.937599038 +0000 UTC m=+1205.697668353" watchObservedRunningTime="2025-12-27 06:04:42.94054341 +0000 UTC m=+1205.700612725" Dec 27 06:04:45 crc kubenswrapper[4760]: I1227 06:04:45.948804 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"9c61bc46-c657-4109-ae26-5f2d02fcce40","Type":"ContainerStarted","Data":"6fe56b7bae758173e2ef9e6b1b838428b98f39d0dd8d2318e8ef00d72d06c6dd"} Dec 27 06:04:45 crc kubenswrapper[4760]: I1227 06:04:45.950590 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"51ff80ae-27ca-4914-8c80-008f6d2d0860","Type":"ContainerStarted","Data":"fd370778045643f4a3b67c4bf7eede402e8f207583537b3837aed0a386f1464e"} Dec 27 06:04:48 crc kubenswrapper[4760]: I1227 06:04:48.416591 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/memcached-0" Dec 27 06:04:48 crc kubenswrapper[4760]: I1227 06:04:48.995763 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"5afbca79-d46f-48e9-82f8-2f676c4c7960","Type":"ContainerStarted","Data":"42dd1817f6aa1d0876cd8aeced5acdcb68b13136b02aa6dab544588bd88e2733"} Dec 27 06:04:49 crc kubenswrapper[4760]: I1227 06:04:49.000838 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"88c1643d-b9a1-49ac-aff2-39bff3918b3e","Type":"ContainerStarted","Data":"5569383d07794709422b8e8b0a228e6e97c556503b6aeebf98868462a2b1761f"} Dec 27 06:04:49 crc kubenswrapper[4760]: I1227 06:04:49.002779 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"ed93a46f-1df6-4144-8487-08764749423a","Type":"ContainerStarted","Data":"bf24548c43358edf464f9d23c0c4f242e0cf6eeb06f1ae133329e5659e531b65"} Dec 27 06:04:53 crc kubenswrapper[4760]: I1227 06:04:53.034401 4760 generic.go:334] "Generic (PLEG): container 
finished" podID="5afbca79-d46f-48e9-82f8-2f676c4c7960" containerID="42dd1817f6aa1d0876cd8aeced5acdcb68b13136b02aa6dab544588bd88e2733" exitCode=0 Dec 27 06:04:53 crc kubenswrapper[4760]: I1227 06:04:53.034475 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"5afbca79-d46f-48e9-82f8-2f676c4c7960","Type":"ContainerDied","Data":"42dd1817f6aa1d0876cd8aeced5acdcb68b13136b02aa6dab544588bd88e2733"} Dec 27 06:04:53 crc kubenswrapper[4760]: I1227 06:04:53.044198 4760 generic.go:334] "Generic (PLEG): container finished" podID="ed93a46f-1df6-4144-8487-08764749423a" containerID="bf24548c43358edf464f9d23c0c4f242e0cf6eeb06f1ae133329e5659e531b65" exitCode=0 Dec 27 06:04:53 crc kubenswrapper[4760]: I1227 06:04:53.044243 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"ed93a46f-1df6-4144-8487-08764749423a","Type":"ContainerDied","Data":"bf24548c43358edf464f9d23c0c4f242e0cf6eeb06f1ae133329e5659e531b65"} Dec 27 06:04:54 crc kubenswrapper[4760]: I1227 06:04:54.056036 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"5afbca79-d46f-48e9-82f8-2f676c4c7960","Type":"ContainerStarted","Data":"ec69ca7ec09e289a9c72d682b02272b0131018a6c6a010f268bb5578935edbdf"} Dec 27 06:04:54 crc kubenswrapper[4760]: I1227 06:04:54.058985 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"ed93a46f-1df6-4144-8487-08764749423a","Type":"ContainerStarted","Data":"2b3cea890a8289241a2b6474f2dac620968dd03a20cd22b96ce95f2e27a8bfff"} Dec 27 06:04:54 crc kubenswrapper[4760]: I1227 06:04:54.091636 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-cell1-galera-0" podStartSLOduration=8.086320199 podStartE2EDuration="41.091612879s" podCreationTimestamp="2025-12-27 06:04:13 +0000 UTC" firstStartedPulling="2025-12-27 06:04:15.03429138 +0000 UTC m=+1177.794360695" lastFinishedPulling="2025-12-27 06:04:48.03958403 +0000 UTC m=+1210.799653375" observedRunningTime="2025-12-27 06:04:54.080061647 +0000 UTC m=+1216.840130972" watchObservedRunningTime="2025-12-27 06:04:54.091612879 +0000 UTC m=+1216.851682204" Dec 27 06:04:54 crc kubenswrapper[4760]: I1227 06:04:54.103077 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-galera-0" podStartSLOduration=9.153523106 podStartE2EDuration="44.103059069s" podCreationTimestamp="2025-12-27 06:04:10 +0000 UTC" firstStartedPulling="2025-12-27 06:04:13.046214455 +0000 UTC m=+1175.806283760" lastFinishedPulling="2025-12-27 06:04:47.995750378 +0000 UTC m=+1210.755819723" observedRunningTime="2025-12-27 06:04:54.102529396 +0000 UTC m=+1216.862598711" watchObservedRunningTime="2025-12-27 06:04:54.103059069 +0000 UTC m=+1216.863128384" Dec 27 06:04:54 crc kubenswrapper[4760]: I1227 06:04:54.507059 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:54 crc kubenswrapper[4760]: I1227 06:04:54.507131 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:58 crc kubenswrapper[4760]: I1227 06:04:58.619126 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:04:58 crc kubenswrapper[4760]: I1227 06:04:58.729058 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-cell1-galera-0" Dec 27 06:05:01 crc kubenswrapper[4760]: I1227 06:05:01.986012 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:05:01 crc kubenswrapper[4760]: I1227 06:05:01.986399 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:05:02 crc kubenswrapper[4760]: I1227 06:05:02.097148 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:05:02 crc kubenswrapper[4760]: I1227 06:05:02.198937 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-galera-0" Dec 27 06:05:02 crc kubenswrapper[4760]: I1227 06:05:02.956928 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-60ab-account-create-update-pv4t6"] Dec 27 06:05:02 crc kubenswrapper[4760]: I1227 06:05:02.958421 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-60ab-account-create-update-pv4t6" Dec 27 06:05:02 crc kubenswrapper[4760]: I1227 06:05:02.961076 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-db-secret" Dec 27 06:05:02 crc kubenswrapper[4760]: I1227 06:05:02.978927 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-60ab-account-create-update-pv4t6"] Dec 27 06:05:02 crc kubenswrapper[4760]: I1227 06:05:02.993508 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-create-zdmqk"] Dec 27 06:05:02 crc kubenswrapper[4760]: I1227 06:05:02.994768 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-create-zdmqk" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.008115 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-zdmqk"] Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.060836 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7phq\" (UniqueName: \"kubernetes.io/projected/cc9402b9-fedb-4d22-b889-92209fd2cf4b-kube-api-access-q7phq\") pod \"keystone-60ab-account-create-update-pv4t6\" (UID: \"cc9402b9-fedb-4d22-b889-92209fd2cf4b\") " pod="nova-kuttl-default/keystone-60ab-account-create-update-pv4t6" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.060904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc9402b9-fedb-4d22-b889-92209fd2cf4b-operator-scripts\") pod \"keystone-60ab-account-create-update-pv4t6\" (UID: \"cc9402b9-fedb-4d22-b889-92209fd2cf4b\") " pod="nova-kuttl-default/keystone-60ab-account-create-update-pv4t6" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.160582 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-zws8q"] Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.162486 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk5gx\" (UniqueName: \"kubernetes.io/projected/2efe4afd-0932-4485-9215-08b34620744e-kube-api-access-dk5gx\") pod \"keystone-db-create-zdmqk\" (UID: \"2efe4afd-0932-4485-9215-08b34620744e\") " pod="nova-kuttl-default/keystone-db-create-zdmqk" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.162596 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7phq\" (UniqueName: \"kubernetes.io/projected/cc9402b9-fedb-4d22-b889-92209fd2cf4b-kube-api-access-q7phq\") pod \"keystone-60ab-account-create-update-pv4t6\" (UID: \"cc9402b9-fedb-4d22-b889-92209fd2cf4b\") " pod="nova-kuttl-default/keystone-60ab-account-create-update-pv4t6" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.162627 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc9402b9-fedb-4d22-b889-92209fd2cf4b-operator-scripts\") pod \"keystone-60ab-account-create-update-pv4t6\" (UID: \"cc9402b9-fedb-4d22-b889-92209fd2cf4b\") " pod="nova-kuttl-default/keystone-60ab-account-create-update-pv4t6" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.162650 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2efe4afd-0932-4485-9215-08b34620744e-operator-scripts\") pod \"keystone-db-create-zdmqk\" (UID: \"2efe4afd-0932-4485-9215-08b34620744e\") " pod="nova-kuttl-default/keystone-db-create-zdmqk" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.163742 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc9402b9-fedb-4d22-b889-92209fd2cf4b-operator-scripts\") pod \"keystone-60ab-account-create-update-pv4t6\" (UID: \"cc9402b9-fedb-4d22-b889-92209fd2cf4b\") " pod="nova-kuttl-default/keystone-60ab-account-create-update-pv4t6" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.167401 4760 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-zws8q" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.174276 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-cell1-mariadb-root-db-secret" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.176686 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-zws8q"] Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.182727 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7phq\" (UniqueName: \"kubernetes.io/projected/cc9402b9-fedb-4d22-b889-92209fd2cf4b-kube-api-access-q7phq\") pod \"keystone-60ab-account-create-update-pv4t6\" (UID: \"cc9402b9-fedb-4d22-b889-92209fd2cf4b\") " pod="nova-kuttl-default/keystone-60ab-account-create-update-pv4t6" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.263605 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e670ac25-e45f-4dde-aa9e-3f390d27ad78-operator-scripts\") pod \"root-account-create-update-zws8q\" (UID: \"e670ac25-e45f-4dde-aa9e-3f390d27ad78\") " pod="nova-kuttl-default/root-account-create-update-zws8q" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.263651 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2efe4afd-0932-4485-9215-08b34620744e-operator-scripts\") pod \"keystone-db-create-zdmqk\" (UID: \"2efe4afd-0932-4485-9215-08b34620744e\") " pod="nova-kuttl-default/keystone-db-create-zdmqk" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.263763 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5jq4\" (UniqueName: \"kubernetes.io/projected/e670ac25-e45f-4dde-aa9e-3f390d27ad78-kube-api-access-h5jq4\") pod \"root-account-create-update-zws8q\" (UID: \"e670ac25-e45f-4dde-aa9e-3f390d27ad78\") " pod="nova-kuttl-default/root-account-create-update-zws8q" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.263817 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk5gx\" (UniqueName: \"kubernetes.io/projected/2efe4afd-0932-4485-9215-08b34620744e-kube-api-access-dk5gx\") pod \"keystone-db-create-zdmqk\" (UID: \"2efe4afd-0932-4485-9215-08b34620744e\") " pod="nova-kuttl-default/keystone-db-create-zdmqk" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.264646 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2efe4afd-0932-4485-9215-08b34620744e-operator-scripts\") pod \"keystone-db-create-zdmqk\" (UID: \"2efe4afd-0932-4485-9215-08b34620744e\") " pod="nova-kuttl-default/keystone-db-create-zdmqk" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.278925 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-create-fxnxx"] Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.279902 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-create-fxnxx" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.285318 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-fxnxx"] Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.287597 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-60ab-account-create-update-pv4t6" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.293786 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk5gx\" (UniqueName: \"kubernetes.io/projected/2efe4afd-0932-4485-9215-08b34620744e-kube-api-access-dk5gx\") pod \"keystone-db-create-zdmqk\" (UID: \"2efe4afd-0932-4485-9215-08b34620744e\") " pod="nova-kuttl-default/keystone-db-create-zdmqk" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.315864 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-86ae-account-create-update-9z6mb"] Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.319763 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-86ae-account-create-update-9z6mb" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.320306 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-zdmqk" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.323202 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-db-secret" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.327085 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-86ae-account-create-update-9z6mb"] Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.365571 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e670ac25-e45f-4dde-aa9e-3f390d27ad78-operator-scripts\") pod \"root-account-create-update-zws8q\" (UID: \"e670ac25-e45f-4dde-aa9e-3f390d27ad78\") " pod="nova-kuttl-default/root-account-create-update-zws8q" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.365632 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85a284e9-7716-4926-9cdd-fb2bab2edba2-operator-scripts\") pod \"placement-db-create-fxnxx\" (UID: \"85a284e9-7716-4926-9cdd-fb2bab2edba2\") " pod="nova-kuttl-default/placement-db-create-fxnxx" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.365714 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqs86\" (UniqueName: \"kubernetes.io/projected/85a284e9-7716-4926-9cdd-fb2bab2edba2-kube-api-access-pqs86\") pod \"placement-db-create-fxnxx\" (UID: \"85a284e9-7716-4926-9cdd-fb2bab2edba2\") " pod="nova-kuttl-default/placement-db-create-fxnxx" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.365843 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5jq4\" (UniqueName: \"kubernetes.io/projected/e670ac25-e45f-4dde-aa9e-3f390d27ad78-kube-api-access-h5jq4\") pod \"root-account-create-update-zws8q\" (UID: \"e670ac25-e45f-4dde-aa9e-3f390d27ad78\") " pod="nova-kuttl-default/root-account-create-update-zws8q" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.366461 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e670ac25-e45f-4dde-aa9e-3f390d27ad78-operator-scripts\") pod \"root-account-create-update-zws8q\" (UID: \"e670ac25-e45f-4dde-aa9e-3f390d27ad78\") " pod="nova-kuttl-default/root-account-create-update-zws8q" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.383927 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5jq4\" (UniqueName: \"kubernetes.io/projected/e670ac25-e45f-4dde-aa9e-3f390d27ad78-kube-api-access-h5jq4\") pod \"root-account-create-update-zws8q\" (UID: \"e670ac25-e45f-4dde-aa9e-3f390d27ad78\") " pod="nova-kuttl-default/root-account-create-update-zws8q" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.467246 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85a284e9-7716-4926-9cdd-fb2bab2edba2-operator-scripts\") pod \"placement-db-create-fxnxx\" (UID: \"85a284e9-7716-4926-9cdd-fb2bab2edba2\") " pod="nova-kuttl-default/placement-db-create-fxnxx" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.467594 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg4dn\" (UniqueName: \"kubernetes.io/projected/2522ef06-2d40-417c-9e4c-631cecbe0b25-kube-api-access-sg4dn\") pod \"placement-86ae-account-create-update-9z6mb\" (UID: \"2522ef06-2d40-417c-9e4c-631cecbe0b25\") " pod="nova-kuttl-default/placement-86ae-account-create-update-9z6mb" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.467624 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2522ef06-2d40-417c-9e4c-631cecbe0b25-operator-scripts\") pod \"placement-86ae-account-create-update-9z6mb\" (UID: \"2522ef06-2d40-417c-9e4c-631cecbe0b25\") " pod="nova-kuttl-default/placement-86ae-account-create-update-9z6mb" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.467670 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqs86\" (UniqueName: \"kubernetes.io/projected/85a284e9-7716-4926-9cdd-fb2bab2edba2-kube-api-access-pqs86\") pod \"placement-db-create-fxnxx\" (UID: \"85a284e9-7716-4926-9cdd-fb2bab2edba2\") " pod="nova-kuttl-default/placement-db-create-fxnxx" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.468549 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85a284e9-7716-4926-9cdd-fb2bab2edba2-operator-scripts\") pod \"placement-db-create-fxnxx\" (UID: \"85a284e9-7716-4926-9cdd-fb2bab2edba2\") " pod="nova-kuttl-default/placement-db-create-fxnxx" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.484465 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqs86\" (UniqueName: \"kubernetes.io/projected/85a284e9-7716-4926-9cdd-fb2bab2edba2-kube-api-access-pqs86\") pod \"placement-db-create-fxnxx\" (UID: \"85a284e9-7716-4926-9cdd-fb2bab2edba2\") " pod="nova-kuttl-default/placement-db-create-fxnxx" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.569804 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg4dn\" (UniqueName: \"kubernetes.io/projected/2522ef06-2d40-417c-9e4c-631cecbe0b25-kube-api-access-sg4dn\") pod \"placement-86ae-account-create-update-9z6mb\" (UID: 
\"2522ef06-2d40-417c-9e4c-631cecbe0b25\") " pod="nova-kuttl-default/placement-86ae-account-create-update-9z6mb" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.569862 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2522ef06-2d40-417c-9e4c-631cecbe0b25-operator-scripts\") pod \"placement-86ae-account-create-update-9z6mb\" (UID: \"2522ef06-2d40-417c-9e4c-631cecbe0b25\") " pod="nova-kuttl-default/placement-86ae-account-create-update-9z6mb" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.570787 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2522ef06-2d40-417c-9e4c-631cecbe0b25-operator-scripts\") pod \"placement-86ae-account-create-update-9z6mb\" (UID: \"2522ef06-2d40-417c-9e4c-631cecbe0b25\") " pod="nova-kuttl-default/placement-86ae-account-create-update-9z6mb" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.571134 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-zws8q" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.586476 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg4dn\" (UniqueName: \"kubernetes.io/projected/2522ef06-2d40-417c-9e4c-631cecbe0b25-kube-api-access-sg4dn\") pod \"placement-86ae-account-create-update-9z6mb\" (UID: \"2522ef06-2d40-417c-9e4c-631cecbe0b25\") " pod="nova-kuttl-default/placement-86ae-account-create-update-9z6mb" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.693492 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-fxnxx" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.705742 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-86ae-account-create-update-9z6mb" Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.721194 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-60ab-account-create-update-pv4t6"] Dec 27 06:05:03 crc kubenswrapper[4760]: W1227 06:05:03.740457 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc9402b9_fedb_4d22_b889_92209fd2cf4b.slice/crio-85ae88b48d0d2378076ba1440d4310b8f52783df702afe89fec751b8c55718ea WatchSource:0}: Error finding container 85ae88b48d0d2378076ba1440d4310b8f52783df702afe89fec751b8c55718ea: Status 404 returned error can't find the container with id 85ae88b48d0d2378076ba1440d4310b8f52783df702afe89fec751b8c55718ea Dec 27 06:05:03 crc kubenswrapper[4760]: I1227 06:05:03.789211 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-zdmqk"] Dec 27 06:05:03 crc kubenswrapper[4760]: W1227 06:05:03.795179 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2efe4afd_0932_4485_9215_08b34620744e.slice/crio-13ff54ca1337f472715c11b1bb1baab02a0a2f9897c09493c890f97ed902e5d0 WatchSource:0}: Error finding container 13ff54ca1337f472715c11b1bb1baab02a0a2f9897c09493c890f97ed902e5d0: Status 404 returned error can't find the container with id 13ff54ca1337f472715c11b1bb1baab02a0a2f9897c09493c890f97ed902e5d0 Dec 27 06:05:04 crc kubenswrapper[4760]: I1227 06:05:04.033285 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-zws8q"] Dec 27 06:05:04 crc kubenswrapper[4760]: W1227 06:05:04.035608 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode670ac25_e45f_4dde_aa9e_3f390d27ad78.slice/crio-4e2516afd9513d759285789e1fbdc9b2ded1b1921951a8978e6c2ebcc805a107 WatchSource:0}: Error finding container 4e2516afd9513d759285789e1fbdc9b2ded1b1921951a8978e6c2ebcc805a107: Status 404 returned error can't find the container with id 4e2516afd9513d759285789e1fbdc9b2ded1b1921951a8978e6c2ebcc805a107 Dec 27 06:05:04 crc kubenswrapper[4760]: I1227 06:05:04.130960 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-zws8q" event={"ID":"e670ac25-e45f-4dde-aa9e-3f390d27ad78","Type":"ContainerStarted","Data":"4e2516afd9513d759285789e1fbdc9b2ded1b1921951a8978e6c2ebcc805a107"} Dec 27 06:05:04 crc kubenswrapper[4760]: I1227 06:05:04.132790 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-zdmqk" event={"ID":"2efe4afd-0932-4485-9215-08b34620744e","Type":"ContainerStarted","Data":"13ff54ca1337f472715c11b1bb1baab02a0a2f9897c09493c890f97ed902e5d0"} Dec 27 06:05:04 crc kubenswrapper[4760]: I1227 06:05:04.134645 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-60ab-account-create-update-pv4t6" event={"ID":"cc9402b9-fedb-4d22-b889-92209fd2cf4b","Type":"ContainerStarted","Data":"85ae88b48d0d2378076ba1440d4310b8f52783df702afe89fec751b8c55718ea"} Dec 27 06:05:04 crc kubenswrapper[4760]: I1227 06:05:04.182303 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-86ae-account-create-update-9z6mb"] Dec 27 06:05:04 crc kubenswrapper[4760]: I1227 06:05:04.248083 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["nova-kuttl-default/placement-db-create-fxnxx"] Dec 27 06:05:04 crc kubenswrapper[4760]: W1227 06:05:04.249012 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85a284e9_7716_4926_9cdd_fb2bab2edba2.slice/crio-cdb13cfbb9c338bc5c2defbcd7d0f1c1aa094c2f570565cac182aa632ae07341 WatchSource:0}: Error finding container cdb13cfbb9c338bc5c2defbcd7d0f1c1aa094c2f570565cac182aa632ae07341: Status 404 returned error can't find the container with id cdb13cfbb9c338bc5c2defbcd7d0f1c1aa094c2f570565cac182aa632ae07341 Dec 27 06:05:05 crc kubenswrapper[4760]: I1227 06:05:05.143979 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-86ae-account-create-update-9z6mb" event={"ID":"2522ef06-2d40-417c-9e4c-631cecbe0b25","Type":"ContainerStarted","Data":"a945956c40d5f8a838a532f357a0430cbca8d397b8d4b97bfd8c14ec90f15219"} Dec 27 06:05:05 crc kubenswrapper[4760]: I1227 06:05:05.145453 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-fxnxx" event={"ID":"85a284e9-7716-4926-9cdd-fb2bab2edba2","Type":"ContainerStarted","Data":"cdb13cfbb9c338bc5c2defbcd7d0f1c1aa094c2f570565cac182aa632ae07341"} Dec 27 06:05:05 crc kubenswrapper[4760]: I1227 06:05:05.288281 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 06:05:05 crc kubenswrapper[4760]: I1227 06:05:05.288365 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 06:05:05 crc kubenswrapper[4760]: I1227 06:05:05.288433 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 06:05:05 crc kubenswrapper[4760]: I1227 06:05:05.289353 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3c94525c7f3f2fe0472a9dc16fbdb91fa7f9227d81da63569e521eb20968b308"} pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 27 06:05:05 crc kubenswrapper[4760]: I1227 06:05:05.289477 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" containerID="cri-o://3c94525c7f3f2fe0472a9dc16fbdb91fa7f9227d81da63569e521eb20968b308" gracePeriod=600 Dec 27 06:05:08 crc kubenswrapper[4760]: I1227 06:05:08.170146 4760 generic.go:334] "Generic (PLEG): container finished" podID="e670ac25-e45f-4dde-aa9e-3f390d27ad78" containerID="b9e2be0ab7f8e9aca44766102a11a287fc3c579767dd4530d7b2fd86abad3223" exitCode=0 Dec 27 06:05:08 crc kubenswrapper[4760]: I1227 06:05:08.170261 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-zws8q" 
event={"ID":"e670ac25-e45f-4dde-aa9e-3f390d27ad78","Type":"ContainerDied","Data":"b9e2be0ab7f8e9aca44766102a11a287fc3c579767dd4530d7b2fd86abad3223"} Dec 27 06:05:08 crc kubenswrapper[4760]: I1227 06:05:08.171915 4760 generic.go:334] "Generic (PLEG): container finished" podID="2efe4afd-0932-4485-9215-08b34620744e" containerID="0824ae8706d79f4b410bb8a4235876ab8444e5545529f87ee82d738f501bb12d" exitCode=0 Dec 27 06:05:08 crc kubenswrapper[4760]: I1227 06:05:08.171974 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-zdmqk" event={"ID":"2efe4afd-0932-4485-9215-08b34620744e","Type":"ContainerDied","Data":"0824ae8706d79f4b410bb8a4235876ab8444e5545529f87ee82d738f501bb12d"} Dec 27 06:05:08 crc kubenswrapper[4760]: I1227 06:05:08.174393 4760 generic.go:334] "Generic (PLEG): container finished" podID="85a284e9-7716-4926-9cdd-fb2bab2edba2" containerID="09078a8635bc15b4d7fbcaab150efa7d3a2c9b655ac5adb62017320fda2a488c" exitCode=0 Dec 27 06:05:08 crc kubenswrapper[4760]: I1227 06:05:08.174571 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-fxnxx" event={"ID":"85a284e9-7716-4926-9cdd-fb2bab2edba2","Type":"ContainerDied","Data":"09078a8635bc15b4d7fbcaab150efa7d3a2c9b655ac5adb62017320fda2a488c"} Dec 27 06:05:08 crc kubenswrapper[4760]: I1227 06:05:08.176005 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-60ab-account-create-update-pv4t6" event={"ID":"cc9402b9-fedb-4d22-b889-92209fd2cf4b","Type":"ContainerStarted","Data":"b0c82b247eff07aaabb3b6db2034a7bd43212cbf4d38eeb5cc6aa951b6971081"} Dec 27 06:05:08 crc kubenswrapper[4760]: I1227 06:05:08.178025 4760 generic.go:334] "Generic (PLEG): container finished" podID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerID="3c94525c7f3f2fe0472a9dc16fbdb91fa7f9227d81da63569e521eb20968b308" exitCode=0 Dec 27 06:05:08 crc kubenswrapper[4760]: I1227 06:05:08.178077 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerDied","Data":"3c94525c7f3f2fe0472a9dc16fbdb91fa7f9227d81da63569e521eb20968b308"} Dec 27 06:05:08 crc kubenswrapper[4760]: I1227 06:05:08.178119 4760 scope.go:117] "RemoveContainer" containerID="250ddace2814c860fabbd9e2871a90feb8c2c9cfa534ed4a183058d24b4ec76d" Dec 27 06:05:08 crc kubenswrapper[4760]: I1227 06:05:08.179595 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-86ae-account-create-update-9z6mb" event={"ID":"2522ef06-2d40-417c-9e4c-631cecbe0b25","Type":"ContainerStarted","Data":"10d472f3947bb722c4f7116e0bab55693d9dd61ef581ce42daa938223d102c0b"} Dec 27 06:05:08 crc kubenswrapper[4760]: I1227 06:05:08.224482 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-60ab-account-create-update-pv4t6" podStartSLOduration=6.224455139 podStartE2EDuration="6.224455139s" podCreationTimestamp="2025-12-27 06:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 06:05:08.213339828 +0000 UTC m=+1230.973409143" watchObservedRunningTime="2025-12-27 06:05:08.224455139 +0000 UTC m=+1230.984524474" Dec 27 06:05:08 crc kubenswrapper[4760]: I1227 06:05:08.255506 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-86ae-account-create-update-9z6mb" 
podStartSLOduration=5.255490097 podStartE2EDuration="5.255490097s" podCreationTimestamp="2025-12-27 06:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 06:05:08.245655637 +0000 UTC m=+1231.005724962" watchObservedRunningTime="2025-12-27 06:05:08.255490097 +0000 UTC m=+1231.015559412" Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.188300 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerStarted","Data":"560a955296849a499da3938ec04851b01ba39103dd145de7716581a5c91fbb44"} Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.190030 4760 generic.go:334] "Generic (PLEG): container finished" podID="2522ef06-2d40-417c-9e4c-631cecbe0b25" containerID="10d472f3947bb722c4f7116e0bab55693d9dd61ef581ce42daa938223d102c0b" exitCode=0 Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.190167 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-86ae-account-create-update-9z6mb" event={"ID":"2522ef06-2d40-417c-9e4c-631cecbe0b25","Type":"ContainerDied","Data":"10d472f3947bb722c4f7116e0bab55693d9dd61ef581ce42daa938223d102c0b"} Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.192457 4760 generic.go:334] "Generic (PLEG): container finished" podID="cc9402b9-fedb-4d22-b889-92209fd2cf4b" containerID="b0c82b247eff07aaabb3b6db2034a7bd43212cbf4d38eeb5cc6aa951b6971081" exitCode=0 Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.192513 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-60ab-account-create-update-pv4t6" event={"ID":"cc9402b9-fedb-4d22-b889-92209fd2cf4b","Type":"ContainerDied","Data":"b0c82b247eff07aaabb3b6db2034a7bd43212cbf4d38eeb5cc6aa951b6971081"} Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.600801 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-zdmqk" Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.669638 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-zws8q" Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.674636 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2efe4afd-0932-4485-9215-08b34620744e-operator-scripts\") pod \"2efe4afd-0932-4485-9215-08b34620744e\" (UID: \"2efe4afd-0932-4485-9215-08b34620744e\") " Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.674842 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk5gx\" (UniqueName: \"kubernetes.io/projected/2efe4afd-0932-4485-9215-08b34620744e-kube-api-access-dk5gx\") pod \"2efe4afd-0932-4485-9215-08b34620744e\" (UID: \"2efe4afd-0932-4485-9215-08b34620744e\") " Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.675288 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2efe4afd-0932-4485-9215-08b34620744e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2efe4afd-0932-4485-9215-08b34620744e" (UID: "2efe4afd-0932-4485-9215-08b34620744e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.675634 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2efe4afd-0932-4485-9215-08b34620744e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.675672 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-fxnxx" Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.696063 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2efe4afd-0932-4485-9215-08b34620744e-kube-api-access-dk5gx" (OuterVolumeSpecName: "kube-api-access-dk5gx") pod "2efe4afd-0932-4485-9215-08b34620744e" (UID: "2efe4afd-0932-4485-9215-08b34620744e"). InnerVolumeSpecName "kube-api-access-dk5gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.776893 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5jq4\" (UniqueName: \"kubernetes.io/projected/e670ac25-e45f-4dde-aa9e-3f390d27ad78-kube-api-access-h5jq4\") pod \"e670ac25-e45f-4dde-aa9e-3f390d27ad78\" (UID: \"e670ac25-e45f-4dde-aa9e-3f390d27ad78\") " Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.777086 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e670ac25-e45f-4dde-aa9e-3f390d27ad78-operator-scripts\") pod \"e670ac25-e45f-4dde-aa9e-3f390d27ad78\" (UID: \"e670ac25-e45f-4dde-aa9e-3f390d27ad78\") " Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.777146 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85a284e9-7716-4926-9cdd-fb2bab2edba2-operator-scripts\") pod \"85a284e9-7716-4926-9cdd-fb2bab2edba2\" (UID: \"85a284e9-7716-4926-9cdd-fb2bab2edba2\") " Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.777177 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqs86\" (UniqueName: \"kubernetes.io/projected/85a284e9-7716-4926-9cdd-fb2bab2edba2-kube-api-access-pqs86\") pod \"85a284e9-7716-4926-9cdd-fb2bab2edba2\" (UID: \"85a284e9-7716-4926-9cdd-fb2bab2edba2\") " Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.777554 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk5gx\" (UniqueName: \"kubernetes.io/projected/2efe4afd-0932-4485-9215-08b34620744e-kube-api-access-dk5gx\") on node \"crc\" DevicePath \"\"" Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.777749 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e670ac25-e45f-4dde-aa9e-3f390d27ad78-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e670ac25-e45f-4dde-aa9e-3f390d27ad78" (UID: "e670ac25-e45f-4dde-aa9e-3f390d27ad78"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.777793 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85a284e9-7716-4926-9cdd-fb2bab2edba2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85a284e9-7716-4926-9cdd-fb2bab2edba2" (UID: "85a284e9-7716-4926-9cdd-fb2bab2edba2"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.779878 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e670ac25-e45f-4dde-aa9e-3f390d27ad78-kube-api-access-h5jq4" (OuterVolumeSpecName: "kube-api-access-h5jq4") pod "e670ac25-e45f-4dde-aa9e-3f390d27ad78" (UID: "e670ac25-e45f-4dde-aa9e-3f390d27ad78"). InnerVolumeSpecName "kube-api-access-h5jq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.780214 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85a284e9-7716-4926-9cdd-fb2bab2edba2-kube-api-access-pqs86" (OuterVolumeSpecName: "kube-api-access-pqs86") pod "85a284e9-7716-4926-9cdd-fb2bab2edba2" (UID: "85a284e9-7716-4926-9cdd-fb2bab2edba2"). InnerVolumeSpecName "kube-api-access-pqs86". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.879596 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e670ac25-e45f-4dde-aa9e-3f390d27ad78-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.879878 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85a284e9-7716-4926-9cdd-fb2bab2edba2-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.879892 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqs86\" (UniqueName: \"kubernetes.io/projected/85a284e9-7716-4926-9cdd-fb2bab2edba2-kube-api-access-pqs86\") on node \"crc\" DevicePath \"\"" Dec 27 06:05:09 crc kubenswrapper[4760]: I1227 06:05:09.879907 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5jq4\" (UniqueName: \"kubernetes.io/projected/e670ac25-e45f-4dde-aa9e-3f390d27ad78-kube-api-access-h5jq4\") on node \"crc\" DevicePath \"\"" Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.205627 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-fxnxx" event={"ID":"85a284e9-7716-4926-9cdd-fb2bab2edba2","Type":"ContainerDied","Data":"cdb13cfbb9c338bc5c2defbcd7d0f1c1aa094c2f570565cac182aa632ae07341"} Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.205703 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdb13cfbb9c338bc5c2defbcd7d0f1c1aa094c2f570565cac182aa632ae07341" Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.206343 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-fxnxx" Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.208327 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-zws8q" Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.208328 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-zws8q" event={"ID":"e670ac25-e45f-4dde-aa9e-3f390d27ad78","Type":"ContainerDied","Data":"4e2516afd9513d759285789e1fbdc9b2ded1b1921951a8978e6c2ebcc805a107"} Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.208530 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e2516afd9513d759285789e1fbdc9b2ded1b1921951a8978e6c2ebcc805a107" Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.210874 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-zdmqk" Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.210888 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-zdmqk" event={"ID":"2efe4afd-0932-4485-9215-08b34620744e","Type":"ContainerDied","Data":"13ff54ca1337f472715c11b1bb1baab02a0a2f9897c09493c890f97ed902e5d0"} Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.211469 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13ff54ca1337f472715c11b1bb1baab02a0a2f9897c09493c890f97ed902e5d0" Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.493004 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-60ab-account-create-update-pv4t6" Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.576993 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-86ae-account-create-update-9z6mb" Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.593033 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc9402b9-fedb-4d22-b889-92209fd2cf4b-operator-scripts\") pod \"cc9402b9-fedb-4d22-b889-92209fd2cf4b\" (UID: \"cc9402b9-fedb-4d22-b889-92209fd2cf4b\") " Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.593149 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7phq\" (UniqueName: \"kubernetes.io/projected/cc9402b9-fedb-4d22-b889-92209fd2cf4b-kube-api-access-q7phq\") pod \"cc9402b9-fedb-4d22-b889-92209fd2cf4b\" (UID: \"cc9402b9-fedb-4d22-b889-92209fd2cf4b\") " Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.594612 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9402b9-fedb-4d22-b889-92209fd2cf4b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc9402b9-fedb-4d22-b889-92209fd2cf4b" (UID: "cc9402b9-fedb-4d22-b889-92209fd2cf4b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.598733 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc9402b9-fedb-4d22-b889-92209fd2cf4b-kube-api-access-q7phq" (OuterVolumeSpecName: "kube-api-access-q7phq") pod "cc9402b9-fedb-4d22-b889-92209fd2cf4b" (UID: "cc9402b9-fedb-4d22-b889-92209fd2cf4b"). InnerVolumeSpecName "kube-api-access-q7phq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.694482 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2522ef06-2d40-417c-9e4c-631cecbe0b25-operator-scripts\") pod \"2522ef06-2d40-417c-9e4c-631cecbe0b25\" (UID: \"2522ef06-2d40-417c-9e4c-631cecbe0b25\") " Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.694815 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg4dn\" (UniqueName: \"kubernetes.io/projected/2522ef06-2d40-417c-9e4c-631cecbe0b25-kube-api-access-sg4dn\") pod \"2522ef06-2d40-417c-9e4c-631cecbe0b25\" (UID: \"2522ef06-2d40-417c-9e4c-631cecbe0b25\") " Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.695491 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc9402b9-fedb-4d22-b889-92209fd2cf4b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.695446 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2522ef06-2d40-417c-9e4c-631cecbe0b25-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2522ef06-2d40-417c-9e4c-631cecbe0b25" (UID: "2522ef06-2d40-417c-9e4c-631cecbe0b25"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.695515 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7phq\" (UniqueName: \"kubernetes.io/projected/cc9402b9-fedb-4d22-b889-92209fd2cf4b-kube-api-access-q7phq\") on node \"crc\" DevicePath \"\"" Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.698379 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2522ef06-2d40-417c-9e4c-631cecbe0b25-kube-api-access-sg4dn" (OuterVolumeSpecName: "kube-api-access-sg4dn") pod "2522ef06-2d40-417c-9e4c-631cecbe0b25" (UID: "2522ef06-2d40-417c-9e4c-631cecbe0b25"). InnerVolumeSpecName "kube-api-access-sg4dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.796995 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2522ef06-2d40-417c-9e4c-631cecbe0b25-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 27 06:05:10 crc kubenswrapper[4760]: I1227 06:05:10.797035 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg4dn\" (UniqueName: \"kubernetes.io/projected/2522ef06-2d40-417c-9e4c-631cecbe0b25-kube-api-access-sg4dn\") on node \"crc\" DevicePath \"\"" Dec 27 06:05:11 crc kubenswrapper[4760]: I1227 06:05:11.221209 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-86ae-account-create-update-9z6mb" event={"ID":"2522ef06-2d40-417c-9e4c-631cecbe0b25","Type":"ContainerDied","Data":"a945956c40d5f8a838a532f357a0430cbca8d397b8d4b97bfd8c14ec90f15219"} Dec 27 06:05:11 crc kubenswrapper[4760]: I1227 06:05:11.221531 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a945956c40d5f8a838a532f357a0430cbca8d397b8d4b97bfd8c14ec90f15219" Dec 27 06:05:11 crc kubenswrapper[4760]: I1227 06:05:11.221295 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-86ae-account-create-update-9z6mb" Dec 27 06:05:11 crc kubenswrapper[4760]: I1227 06:05:11.222682 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-60ab-account-create-update-pv4t6" event={"ID":"cc9402b9-fedb-4d22-b889-92209fd2cf4b","Type":"ContainerDied","Data":"85ae88b48d0d2378076ba1440d4310b8f52783df702afe89fec751b8c55718ea"} Dec 27 06:05:11 crc kubenswrapper[4760]: I1227 06:05:11.222718 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85ae88b48d0d2378076ba1440d4310b8f52783df702afe89fec751b8c55718ea" Dec 27 06:05:11 crc kubenswrapper[4760]: I1227 06:05:11.222776 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-60ab-account-create-update-pv4t6" Dec 27 06:05:15 crc kubenswrapper[4760]: I1227 06:05:15.648100 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-zws8q"] Dec 27 06:05:15 crc kubenswrapper[4760]: I1227 06:05:15.655047 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-zws8q"] Dec 27 06:05:17 crc kubenswrapper[4760]: I1227 06:05:17.518538 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e670ac25-e45f-4dde-aa9e-3f390d27ad78" path="/var/lib/kubelet/pods/e670ac25-e45f-4dde-aa9e-3f390d27ad78/volumes" Dec 27 06:05:18 crc kubenswrapper[4760]: I1227 06:05:18.286135 4760 generic.go:334] "Generic (PLEG): container finished" podID="9c61bc46-c657-4109-ae26-5f2d02fcce40" containerID="6fe56b7bae758173e2ef9e6b1b838428b98f39d0dd8d2318e8ef00d72d06c6dd" exitCode=0 Dec 27 06:05:18 crc kubenswrapper[4760]: I1227 06:05:18.286200 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"9c61bc46-c657-4109-ae26-5f2d02fcce40","Type":"ContainerDied","Data":"6fe56b7bae758173e2ef9e6b1b838428b98f39d0dd8d2318e8ef00d72d06c6dd"} Dec 27 06:05:19 crc kubenswrapper[4760]: I1227 06:05:19.295617 4760 generic.go:334] "Generic (PLEG): container finished" podID="51ff80ae-27ca-4914-8c80-008f6d2d0860" containerID="fd370778045643f4a3b67c4bf7eede402e8f207583537b3837aed0a386f1464e" exitCode=0 Dec 27 06:05:19 crc kubenswrapper[4760]: I1227 06:05:19.295701 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"51ff80ae-27ca-4914-8c80-008f6d2d0860","Type":"ContainerDied","Data":"fd370778045643f4a3b67c4bf7eede402e8f207583537b3837aed0a386f1464e"} Dec 27 06:05:19 crc kubenswrapper[4760]: I1227 06:05:19.301971 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"9c61bc46-c657-4109-ae26-5f2d02fcce40","Type":"ContainerStarted","Data":"f87322a6e253514ed010b1b26dc9d50cc78d4d148cc9428500e5a7758008577d"} Dec 27 06:05:19 crc kubenswrapper[4760]: I1227 06:05:19.302877 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:05:19 crc kubenswrapper[4760]: I1227 06:05:19.352164 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" podStartSLOduration=39.326270032 podStartE2EDuration="1m10.352142678s" podCreationTimestamp="2025-12-27 06:04:09 +0000 UTC" firstStartedPulling="2025-12-27 06:04:12.957452807 +0000 UTC m=+1175.717522122" lastFinishedPulling="2025-12-27 
06:04:43.983325453 +0000 UTC m=+1206.743394768" observedRunningTime="2025-12-27 06:05:19.347158016 +0000 UTC m=+1242.107227361" watchObservedRunningTime="2025-12-27 06:05:19.352142678 +0000 UTC m=+1242.112212004" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.310414 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"51ff80ae-27ca-4914-8c80-008f6d2d0860","Type":"ContainerStarted","Data":"040fe0d241facd45a35c92f00d6fab441ad1d60ceac65d1d098725452cc64cac"} Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.310883 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.340702 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-cell1-server-0" podStartSLOduration=-9223371965.514091 podStartE2EDuration="1m11.340684617s" podCreationTimestamp="2025-12-27 06:04:09 +0000 UTC" firstStartedPulling="2025-12-27 06:04:13.073310197 +0000 UTC m=+1175.833379512" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 06:05:20.333000118 +0000 UTC m=+1243.093069443" watchObservedRunningTime="2025-12-27 06:05:20.340684617 +0000 UTC m=+1243.100753942" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.666537 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-7r7jq"] Dec 27 06:05:20 crc kubenswrapper[4760]: E1227 06:05:20.667010 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2efe4afd-0932-4485-9215-08b34620744e" containerName="mariadb-database-create" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.667041 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2efe4afd-0932-4485-9215-08b34620744e" containerName="mariadb-database-create" Dec 27 06:05:20 crc kubenswrapper[4760]: E1227 06:05:20.667075 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc9402b9-fedb-4d22-b889-92209fd2cf4b" containerName="mariadb-account-create-update" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.667263 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9402b9-fedb-4d22-b889-92209fd2cf4b" containerName="mariadb-account-create-update" Dec 27 06:05:20 crc kubenswrapper[4760]: E1227 06:05:20.667357 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2522ef06-2d40-417c-9e4c-631cecbe0b25" containerName="mariadb-account-create-update" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.667375 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2522ef06-2d40-417c-9e4c-631cecbe0b25" containerName="mariadb-account-create-update" Dec 27 06:05:20 crc kubenswrapper[4760]: E1227 06:05:20.667393 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e670ac25-e45f-4dde-aa9e-3f390d27ad78" containerName="mariadb-account-create-update" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.667401 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e670ac25-e45f-4dde-aa9e-3f390d27ad78" containerName="mariadb-account-create-update" Dec 27 06:05:20 crc kubenswrapper[4760]: E1227 06:05:20.667419 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85a284e9-7716-4926-9cdd-fb2bab2edba2" containerName="mariadb-database-create" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.667428 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="85a284e9-7716-4926-9cdd-fb2bab2edba2" containerName="mariadb-database-create" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.667770 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e670ac25-e45f-4dde-aa9e-3f390d27ad78" containerName="mariadb-account-create-update" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.667782 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2522ef06-2d40-417c-9e4c-631cecbe0b25" containerName="mariadb-account-create-update" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.667797 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc9402b9-fedb-4d22-b889-92209fd2cf4b" containerName="mariadb-account-create-update" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.667814 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="85a284e9-7716-4926-9cdd-fb2bab2edba2" containerName="mariadb-database-create" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.667827 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2efe4afd-0932-4485-9215-08b34620744e" containerName="mariadb-database-create" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.668457 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-7r7jq" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.672997 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-mariadb-root-db-secret" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.710463 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-7r7jq"] Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.758290 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fkrf\" (UniqueName: \"kubernetes.io/projected/3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af-kube-api-access-8fkrf\") pod \"root-account-create-update-7r7jq\" (UID: \"3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af\") " pod="nova-kuttl-default/root-account-create-update-7r7jq" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.758395 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af-operator-scripts\") pod \"root-account-create-update-7r7jq\" (UID: \"3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af\") " pod="nova-kuttl-default/root-account-create-update-7r7jq" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.859691 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fkrf\" (UniqueName: \"kubernetes.io/projected/3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af-kube-api-access-8fkrf\") pod \"root-account-create-update-7r7jq\" (UID: \"3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af\") " pod="nova-kuttl-default/root-account-create-update-7r7jq" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.859809 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af-operator-scripts\") pod \"root-account-create-update-7r7jq\" (UID: \"3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af\") " pod="nova-kuttl-default/root-account-create-update-7r7jq" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.861432 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af-operator-scripts\") pod \"root-account-create-update-7r7jq\" (UID: \"3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af\") " pod="nova-kuttl-default/root-account-create-update-7r7jq" Dec 27 06:05:20 crc kubenswrapper[4760]: I1227 06:05:20.896225 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fkrf\" (UniqueName: \"kubernetes.io/projected/3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af-kube-api-access-8fkrf\") pod \"root-account-create-update-7r7jq\" (UID: \"3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af\") " pod="nova-kuttl-default/root-account-create-update-7r7jq" Dec 27 06:05:21 crc kubenswrapper[4760]: I1227 06:05:21.004527 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-7r7jq" Dec 27 06:05:21 crc kubenswrapper[4760]: I1227 06:05:21.323069 4760 generic.go:334] "Generic (PLEG): container finished" podID="88c1643d-b9a1-49ac-aff2-39bff3918b3e" containerID="5569383d07794709422b8e8b0a228e6e97c556503b6aeebf98868462a2b1761f" exitCode=0 Dec 27 06:05:21 crc kubenswrapper[4760]: I1227 06:05:21.323151 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"88c1643d-b9a1-49ac-aff2-39bff3918b3e","Type":"ContainerDied","Data":"5569383d07794709422b8e8b0a228e6e97c556503b6aeebf98868462a2b1761f"} Dec 27 06:05:21 crc kubenswrapper[4760]: I1227 06:05:21.488440 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-7r7jq"] Dec 27 06:05:22 crc kubenswrapper[4760]: I1227 06:05:22.334522 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"88c1643d-b9a1-49ac-aff2-39bff3918b3e","Type":"ContainerStarted","Data":"33a84d552d8a496344ed2cf095122b1aad437bc02dc21b756e9f3b5e6d77d50e"} Dec 27 06:05:22 crc kubenswrapper[4760]: I1227 06:05:22.335175 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:05:22 crc kubenswrapper[4760]: I1227 06:05:22.336963 4760 generic.go:334] "Generic (PLEG): container finished" podID="3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af" containerID="7aeddfae2b053160c986a148aa8cbde37e8f69dc02d1f5e1c6a57e0990d0f234" exitCode=0 Dec 27 06:05:22 crc kubenswrapper[4760]: I1227 06:05:22.337049 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-7r7jq" event={"ID":"3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af","Type":"ContainerDied","Data":"7aeddfae2b053160c986a148aa8cbde37e8f69dc02d1f5e1c6a57e0990d0f234"} Dec 27 06:05:22 crc kubenswrapper[4760]: I1227 06:05:22.337307 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-7r7jq" event={"ID":"3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af","Type":"ContainerStarted","Data":"90a9049b9fa9b63a58ba6f93a6c9e80dd3ca0fc8817cbfab4dcb9ca04a0c3f9d"} Dec 27 06:05:22 crc kubenswrapper[4760]: I1227 06:05:22.379900 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-server-0" podStartSLOduration=-9223371963.4749 podStartE2EDuration="1m13.379876279s" podCreationTimestamp="2025-12-27 06:04:09 +0000 UTC" firstStartedPulling="2025-12-27 06:04:12.577221158 +0000 UTC m=+1175.337290473" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 06:05:22.372249904 +0000 UTC m=+1245.132319249" 
watchObservedRunningTime="2025-12-27 06:05:22.379876279 +0000 UTC m=+1245.139945634" Dec 27 06:05:23 crc kubenswrapper[4760]: I1227 06:05:23.595960 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-7r7jq" Dec 27 06:05:23 crc kubenswrapper[4760]: I1227 06:05:23.705320 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fkrf\" (UniqueName: \"kubernetes.io/projected/3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af-kube-api-access-8fkrf\") pod \"3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af\" (UID: \"3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af\") " Dec 27 06:05:23 crc kubenswrapper[4760]: I1227 06:05:23.705822 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af-operator-scripts\") pod \"3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af\" (UID: \"3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af\") " Dec 27 06:05:23 crc kubenswrapper[4760]: I1227 06:05:23.706572 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af" (UID: "3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 06:05:23 crc kubenswrapper[4760]: I1227 06:05:23.713266 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af-kube-api-access-8fkrf" (OuterVolumeSpecName: "kube-api-access-8fkrf") pod "3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af" (UID: "3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af"). InnerVolumeSpecName "kube-api-access-8fkrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:05:23 crc kubenswrapper[4760]: I1227 06:05:23.807920 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fkrf\" (UniqueName: \"kubernetes.io/projected/3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af-kube-api-access-8fkrf\") on node \"crc\" DevicePath \"\"" Dec 27 06:05:23 crc kubenswrapper[4760]: I1227 06:05:23.807955 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 27 06:05:24 crc kubenswrapper[4760]: I1227 06:05:24.354796 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-7r7jq" event={"ID":"3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af","Type":"ContainerDied","Data":"90a9049b9fa9b63a58ba6f93a6c9e80dd3ca0fc8817cbfab4dcb9ca04a0c3f9d"} Dec 27 06:05:24 crc kubenswrapper[4760]: I1227 06:05:24.355358 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90a9049b9fa9b63a58ba6f93a6c9e80dd3ca0fc8817cbfab4dcb9ca04a0c3f9d" Dec 27 06:05:24 crc kubenswrapper[4760]: I1227 06:05:24.354818 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-7r7jq" Dec 27 06:05:31 crc kubenswrapper[4760]: I1227 06:05:31.247132 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Dec 27 06:05:32 crc kubenswrapper[4760]: I1227 06:05:32.464268 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Dec 27 06:05:33 crc kubenswrapper[4760]: E1227 06:05:33.240754 4760 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.192:36368->38.102.83.192:43791: write tcp 38.102.83.192:36368->38.102.83.192:43791: write: broken pipe Dec 27 06:05:40 crc kubenswrapper[4760]: I1227 06:05:40.676581 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-server-0" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.326304 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-sync-2t87g"] Dec 27 06:05:41 crc kubenswrapper[4760]: E1227 06:05:41.326796 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af" containerName="mariadb-account-create-update" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.326827 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af" containerName="mariadb-account-create-update" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.327335 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af" containerName="mariadb-account-create-update" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.328257 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-2t87g" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.331441 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.331942 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.332116 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.342671 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-chrfw" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.346152 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-2t87g"] Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.395113 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwx9l\" (UniqueName: \"kubernetes.io/projected/6d335d87-9e3c-4826-bea2-ec9884fde6e0-kube-api-access-rwx9l\") pod \"keystone-db-sync-2t87g\" (UID: \"6d335d87-9e3c-4826-bea2-ec9884fde6e0\") " pod="nova-kuttl-default/keystone-db-sync-2t87g" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.395199 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d335d87-9e3c-4826-bea2-ec9884fde6e0-config-data\") pod \"keystone-db-sync-2t87g\" (UID: \"6d335d87-9e3c-4826-bea2-ec9884fde6e0\") " pod="nova-kuttl-default/keystone-db-sync-2t87g" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.395285 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d335d87-9e3c-4826-bea2-ec9884fde6e0-combined-ca-bundle\") pod \"keystone-db-sync-2t87g\" (UID: \"6d335d87-9e3c-4826-bea2-ec9884fde6e0\") " pod="nova-kuttl-default/keystone-db-sync-2t87g" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.496868 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d335d87-9e3c-4826-bea2-ec9884fde6e0-combined-ca-bundle\") pod \"keystone-db-sync-2t87g\" (UID: \"6d335d87-9e3c-4826-bea2-ec9884fde6e0\") " pod="nova-kuttl-default/keystone-db-sync-2t87g" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.497018 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwx9l\" (UniqueName: \"kubernetes.io/projected/6d335d87-9e3c-4826-bea2-ec9884fde6e0-kube-api-access-rwx9l\") pod \"keystone-db-sync-2t87g\" (UID: \"6d335d87-9e3c-4826-bea2-ec9884fde6e0\") " pod="nova-kuttl-default/keystone-db-sync-2t87g" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.497077 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d335d87-9e3c-4826-bea2-ec9884fde6e0-config-data\") pod \"keystone-db-sync-2t87g\" (UID: \"6d335d87-9e3c-4826-bea2-ec9884fde6e0\") " pod="nova-kuttl-default/keystone-db-sync-2t87g" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.503839 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6d335d87-9e3c-4826-bea2-ec9884fde6e0-combined-ca-bundle\") pod \"keystone-db-sync-2t87g\" (UID: \"6d335d87-9e3c-4826-bea2-ec9884fde6e0\") " pod="nova-kuttl-default/keystone-db-sync-2t87g" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.511280 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d335d87-9e3c-4826-bea2-ec9884fde6e0-config-data\") pod \"keystone-db-sync-2t87g\" (UID: \"6d335d87-9e3c-4826-bea2-ec9884fde6e0\") " pod="nova-kuttl-default/keystone-db-sync-2t87g" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.523074 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwx9l\" (UniqueName: \"kubernetes.io/projected/6d335d87-9e3c-4826-bea2-ec9884fde6e0-kube-api-access-rwx9l\") pod \"keystone-db-sync-2t87g\" (UID: \"6d335d87-9e3c-4826-bea2-ec9884fde6e0\") " pod="nova-kuttl-default/keystone-db-sync-2t87g" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.651676 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-2t87g" Dec 27 06:05:41 crc kubenswrapper[4760]: I1227 06:05:41.876876 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-2t87g"] Dec 27 06:05:41 crc kubenswrapper[4760]: W1227 06:05:41.884159 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d335d87_9e3c_4826_bea2_ec9884fde6e0.slice/crio-48cd7ef817a574a9202ca47435121f7d5c250e6af72db9c455a2af3f5490ba59 WatchSource:0}: Error finding container 48cd7ef817a574a9202ca47435121f7d5c250e6af72db9c455a2af3f5490ba59: Status 404 returned error can't find the container with id 48cd7ef817a574a9202ca47435121f7d5c250e6af72db9c455a2af3f5490ba59 Dec 27 06:05:42 crc kubenswrapper[4760]: I1227 06:05:42.517376 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-2t87g" event={"ID":"6d335d87-9e3c-4826-bea2-ec9884fde6e0","Type":"ContainerStarted","Data":"48cd7ef817a574a9202ca47435121f7d5c250e6af72db9c455a2af3f5490ba59"} Dec 27 06:05:48 crc kubenswrapper[4760]: I1227 06:05:48.574610 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-2t87g" event={"ID":"6d335d87-9e3c-4826-bea2-ec9884fde6e0","Type":"ContainerStarted","Data":"bad73730055928a71f66a5d77c61c785e71a4ee8586cb7d2ba91242c0dc02267"} Dec 27 06:05:48 crc kubenswrapper[4760]: I1227 06:05:48.610293 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-db-sync-2t87g" podStartSLOduration=1.7329032359999998 podStartE2EDuration="7.61026423s" podCreationTimestamp="2025-12-27 06:05:41 +0000 UTC" firstStartedPulling="2025-12-27 06:05:41.887039984 +0000 UTC m=+1264.647109319" lastFinishedPulling="2025-12-27 06:05:47.764400968 +0000 UTC m=+1270.524470313" observedRunningTime="2025-12-27 06:05:48.59756944 +0000 UTC m=+1271.357638785" watchObservedRunningTime="2025-12-27 06:05:48.61026423 +0000 UTC m=+1271.370333585" Dec 27 06:05:57 crc kubenswrapper[4760]: I1227 06:05:57.662604 4760 generic.go:334] "Generic (PLEG): container finished" podID="6d335d87-9e3c-4826-bea2-ec9884fde6e0" containerID="bad73730055928a71f66a5d77c61c785e71a4ee8586cb7d2ba91242c0dc02267" exitCode=0 Dec 27 06:05:57 crc kubenswrapper[4760]: I1227 06:05:57.663344 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/keystone-db-sync-2t87g" event={"ID":"6d335d87-9e3c-4826-bea2-ec9884fde6e0","Type":"ContainerDied","Data":"bad73730055928a71f66a5d77c61c785e71a4ee8586cb7d2ba91242c0dc02267"} Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.041567 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-2t87g" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.208004 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwx9l\" (UniqueName: \"kubernetes.io/projected/6d335d87-9e3c-4826-bea2-ec9884fde6e0-kube-api-access-rwx9l\") pod \"6d335d87-9e3c-4826-bea2-ec9884fde6e0\" (UID: \"6d335d87-9e3c-4826-bea2-ec9884fde6e0\") " Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.208162 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d335d87-9e3c-4826-bea2-ec9884fde6e0-config-data\") pod \"6d335d87-9e3c-4826-bea2-ec9884fde6e0\" (UID: \"6d335d87-9e3c-4826-bea2-ec9884fde6e0\") " Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.208186 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d335d87-9e3c-4826-bea2-ec9884fde6e0-combined-ca-bundle\") pod \"6d335d87-9e3c-4826-bea2-ec9884fde6e0\" (UID: \"6d335d87-9e3c-4826-bea2-ec9884fde6e0\") " Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.214205 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d335d87-9e3c-4826-bea2-ec9884fde6e0-kube-api-access-rwx9l" (OuterVolumeSpecName: "kube-api-access-rwx9l") pod "6d335d87-9e3c-4826-bea2-ec9884fde6e0" (UID: "6d335d87-9e3c-4826-bea2-ec9884fde6e0"). InnerVolumeSpecName "kube-api-access-rwx9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.229065 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d335d87-9e3c-4826-bea2-ec9884fde6e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d335d87-9e3c-4826-bea2-ec9884fde6e0" (UID: "6d335d87-9e3c-4826-bea2-ec9884fde6e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.252955 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d335d87-9e3c-4826-bea2-ec9884fde6e0-config-data" (OuterVolumeSpecName: "config-data") pod "6d335d87-9e3c-4826-bea2-ec9884fde6e0" (UID: "6d335d87-9e3c-4826-bea2-ec9884fde6e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.310601 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwx9l\" (UniqueName: \"kubernetes.io/projected/6d335d87-9e3c-4826-bea2-ec9884fde6e0-kube-api-access-rwx9l\") on node \"crc\" DevicePath \"\"" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.310684 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d335d87-9e3c-4826-bea2-ec9884fde6e0-config-data\") on node \"crc\" DevicePath \"\"" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.310713 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d335d87-9e3c-4826-bea2-ec9884fde6e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.681077 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-2t87g" event={"ID":"6d335d87-9e3c-4826-bea2-ec9884fde6e0","Type":"ContainerDied","Data":"48cd7ef817a574a9202ca47435121f7d5c250e6af72db9c455a2af3f5490ba59"} Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.681153 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48cd7ef817a574a9202ca47435121f7d5c250e6af72db9c455a2af3f5490ba59" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.681229 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-2t87g" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.918316 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-lktk7"] Dec 27 06:05:59 crc kubenswrapper[4760]: E1227 06:05:59.918716 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d335d87-9e3c-4826-bea2-ec9884fde6e0" containerName="keystone-db-sync" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.918736 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d335d87-9e3c-4826-bea2-ec9884fde6e0" containerName="keystone-db-sync" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.918923 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d335d87-9e3c-4826-bea2-ec9884fde6e0" containerName="keystone-db-sync" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.919555 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.921722 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.924268 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.924422 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.924295 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-chrfw" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.924718 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Dec 27 06:05:59 crc kubenswrapper[4760]: I1227 06:05:59.929667 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-lktk7"] Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.021158 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-scripts\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.021264 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-config-data\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.021310 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-credential-keys\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.021347 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-combined-ca-bundle\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.021435 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-fernet-keys\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.021499 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qj68\" (UniqueName: \"kubernetes.io/projected/913d576a-c66e-4335-9b89-d863017e31d5-kube-api-access-2qj68\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 
crc kubenswrapper[4760]: I1227 06:06:00.122323 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-credential-keys\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.122410 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-combined-ca-bundle\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.122897 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-fernet-keys\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.122952 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qj68\" (UniqueName: \"kubernetes.io/projected/913d576a-c66e-4335-9b89-d863017e31d5-kube-api-access-2qj68\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.122997 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-scripts\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.123041 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-config-data\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.126415 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-credential-keys\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.126612 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-combined-ca-bundle\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.127175 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-config-data\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.130454 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-scripts\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.130753 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-fernet-keys\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.148766 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qj68\" (UniqueName: \"kubernetes.io/projected/913d576a-c66e-4335-9b89-d863017e31d5-kube-api-access-2qj68\") pod \"keystone-bootstrap-lktk7\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.192012 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-sync-gjkm5"] Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.192989 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.195937 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.196451 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-gwlcl" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.196453 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.205030 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-gjkm5"] Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.224211 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c288547a-4460-45d1-aad0-c311b34e2a6c-logs\") pod \"placement-db-sync-gjkm5\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") " pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.225386 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx4hz\" (UniqueName: \"kubernetes.io/projected/c288547a-4460-45d1-aad0-c311b34e2a6c-kube-api-access-qx4hz\") pod \"placement-db-sync-gjkm5\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") " pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.225454 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-config-data\") pod \"placement-db-sync-gjkm5\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") " pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.225498 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-scripts\") pod \"placement-db-sync-gjkm5\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") " pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.225525 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-combined-ca-bundle\") pod \"placement-db-sync-gjkm5\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") " pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.241890 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.330084 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx4hz\" (UniqueName: \"kubernetes.io/projected/c288547a-4460-45d1-aad0-c311b34e2a6c-kube-api-access-qx4hz\") pod \"placement-db-sync-gjkm5\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") " pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.330503 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-config-data\") pod \"placement-db-sync-gjkm5\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") " pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.330556 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-scripts\") pod \"placement-db-sync-gjkm5\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") " pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.330591 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-combined-ca-bundle\") pod \"placement-db-sync-gjkm5\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") " pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.330659 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c288547a-4460-45d1-aad0-c311b34e2a6c-logs\") pod \"placement-db-sync-gjkm5\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") " pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.332105 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c288547a-4460-45d1-aad0-c311b34e2a6c-logs\") pod \"placement-db-sync-gjkm5\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") " pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.334174 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-combined-ca-bundle\") pod \"placement-db-sync-gjkm5\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") " pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.334765 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-scripts\") pod \"placement-db-sync-gjkm5\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") " pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.335454 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-config-data\") pod \"placement-db-sync-gjkm5\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") " pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.348706 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx4hz\" (UniqueName: \"kubernetes.io/projected/c288547a-4460-45d1-aad0-c311b34e2a6c-kube-api-access-qx4hz\") pod \"placement-db-sync-gjkm5\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") " pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.514459 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.697689 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-lktk7"] Dec 27 06:06:00 crc kubenswrapper[4760]: W1227 06:06:00.923544 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc288547a_4460_45d1_aad0_c311b34e2a6c.slice/crio-bc201d4fc2f0164ace7ccf64202b033d24806c11bd3804ffb61d18e5a9994427 WatchSource:0}: Error finding container bc201d4fc2f0164ace7ccf64202b033d24806c11bd3804ffb61d18e5a9994427: Status 404 returned error can't find the container with id bc201d4fc2f0164ace7ccf64202b033d24806c11bd3804ffb61d18e5a9994427 Dec 27 06:06:00 crc kubenswrapper[4760]: I1227 06:06:00.924493 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-gjkm5"] Dec 27 06:06:01 crc kubenswrapper[4760]: I1227 06:06:01.703708 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-lktk7" event={"ID":"913d576a-c66e-4335-9b89-d863017e31d5","Type":"ContainerStarted","Data":"1c79392ca1ae462f2a518acfe4683c290bbc8c87d2a1ae253711e325d8d9dc0a"} Dec 27 06:06:01 crc kubenswrapper[4760]: I1227 06:06:01.704074 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-lktk7" event={"ID":"913d576a-c66e-4335-9b89-d863017e31d5","Type":"ContainerStarted","Data":"e3493789e5eb54bd66fbc9b57e49765ba940b010daedb8e4643ffa8ace537f6a"} Dec 27 06:06:01 crc kubenswrapper[4760]: I1227 06:06:01.707863 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-gjkm5" event={"ID":"c288547a-4460-45d1-aad0-c311b34e2a6c","Type":"ContainerStarted","Data":"bc201d4fc2f0164ace7ccf64202b033d24806c11bd3804ffb61d18e5a9994427"} Dec 27 06:06:01 crc kubenswrapper[4760]: I1227 06:06:01.735256 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-lktk7" podStartSLOduration=2.7352298900000003 podStartE2EDuration="2.73522989s" podCreationTimestamp="2025-12-27 06:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 06:06:01.723268238 
+0000 UTC m=+1284.483337603" watchObservedRunningTime="2025-12-27 06:06:01.73522989 +0000 UTC m=+1284.495299245" Dec 27 06:06:08 crc kubenswrapper[4760]: I1227 06:06:08.764737 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-gjkm5" event={"ID":"c288547a-4460-45d1-aad0-c311b34e2a6c","Type":"ContainerStarted","Data":"9f8797c72f9d3f75ac55915efe640da116c17c66edfde2b0f6bbd177e3fb695d"} Dec 27 06:06:08 crc kubenswrapper[4760]: I1227 06:06:08.766791 4760 generic.go:334] "Generic (PLEG): container finished" podID="913d576a-c66e-4335-9b89-d863017e31d5" containerID="1c79392ca1ae462f2a518acfe4683c290bbc8c87d2a1ae253711e325d8d9dc0a" exitCode=0 Dec 27 06:06:08 crc kubenswrapper[4760]: I1227 06:06:08.766834 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-lktk7" event={"ID":"913d576a-c66e-4335-9b89-d863017e31d5","Type":"ContainerDied","Data":"1c79392ca1ae462f2a518acfe4683c290bbc8c87d2a1ae253711e325d8d9dc0a"} Dec 27 06:06:08 crc kubenswrapper[4760]: I1227 06:06:08.791545 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-db-sync-gjkm5" podStartSLOduration=2.033340572 podStartE2EDuration="8.791518472s" podCreationTimestamp="2025-12-27 06:06:00 +0000 UTC" firstStartedPulling="2025-12-27 06:06:00.928534094 +0000 UTC m=+1283.688603409" lastFinishedPulling="2025-12-27 06:06:07.686711994 +0000 UTC m=+1290.446781309" observedRunningTime="2025-12-27 06:06:08.789224856 +0000 UTC m=+1291.549294171" watchObservedRunningTime="2025-12-27 06:06:08.791518472 +0000 UTC m=+1291.551587827" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.118315 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.213749 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-fernet-keys\") pod \"913d576a-c66e-4335-9b89-d863017e31d5\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.213819 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-credential-keys\") pod \"913d576a-c66e-4335-9b89-d863017e31d5\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.213879 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-combined-ca-bundle\") pod \"913d576a-c66e-4335-9b89-d863017e31d5\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.213907 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-scripts\") pod \"913d576a-c66e-4335-9b89-d863017e31d5\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.213941 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qj68\" (UniqueName: \"kubernetes.io/projected/913d576a-c66e-4335-9b89-d863017e31d5-kube-api-access-2qj68\") pod \"913d576a-c66e-4335-9b89-d863017e31d5\" (UID: 
\"913d576a-c66e-4335-9b89-d863017e31d5\") " Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.213969 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-config-data\") pod \"913d576a-c66e-4335-9b89-d863017e31d5\" (UID: \"913d576a-c66e-4335-9b89-d863017e31d5\") " Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.220461 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "913d576a-c66e-4335-9b89-d863017e31d5" (UID: "913d576a-c66e-4335-9b89-d863017e31d5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.220503 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-scripts" (OuterVolumeSpecName: "scripts") pod "913d576a-c66e-4335-9b89-d863017e31d5" (UID: "913d576a-c66e-4335-9b89-d863017e31d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.221158 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/913d576a-c66e-4335-9b89-d863017e31d5-kube-api-access-2qj68" (OuterVolumeSpecName: "kube-api-access-2qj68") pod "913d576a-c66e-4335-9b89-d863017e31d5" (UID: "913d576a-c66e-4335-9b89-d863017e31d5"). InnerVolumeSpecName "kube-api-access-2qj68". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.221587 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "913d576a-c66e-4335-9b89-d863017e31d5" (UID: "913d576a-c66e-4335-9b89-d863017e31d5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.236700 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-config-data" (OuterVolumeSpecName: "config-data") pod "913d576a-c66e-4335-9b89-d863017e31d5" (UID: "913d576a-c66e-4335-9b89-d863017e31d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.247603 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "913d576a-c66e-4335-9b89-d863017e31d5" (UID: "913d576a-c66e-4335-9b89-d863017e31d5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.315288 4760 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.315318 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.315327 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-scripts\") on node \"crc\" DevicePath \"\"" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.315336 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qj68\" (UniqueName: \"kubernetes.io/projected/913d576a-c66e-4335-9b89-d863017e31d5-kube-api-access-2qj68\") on node \"crc\" DevicePath \"\"" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.315400 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-config-data\") on node \"crc\" DevicePath \"\"" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.315412 4760 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/913d576a-c66e-4335-9b89-d863017e31d5-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.800884 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-lktk7" event={"ID":"913d576a-c66e-4335-9b89-d863017e31d5","Type":"ContainerDied","Data":"e3493789e5eb54bd66fbc9b57e49765ba940b010daedb8e4643ffa8ace537f6a"} Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.800910 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-lktk7" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.800922 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3493789e5eb54bd66fbc9b57e49765ba940b010daedb8e4643ffa8ace537f6a" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.805192 4760 generic.go:334] "Generic (PLEG): container finished" podID="c288547a-4460-45d1-aad0-c311b34e2a6c" containerID="9f8797c72f9d3f75ac55915efe640da116c17c66edfde2b0f6bbd177e3fb695d" exitCode=0 Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.805252 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-gjkm5" event={"ID":"c288547a-4460-45d1-aad0-c311b34e2a6c","Type":"ContainerDied","Data":"9f8797c72f9d3f75ac55915efe640da116c17c66edfde2b0f6bbd177e3fb695d"} Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.897426 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-lktk7"] Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.923042 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-lktk7"] Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.978898 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-pws6r"] Dec 27 06:06:10 crc kubenswrapper[4760]: E1227 06:06:10.979288 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913d576a-c66e-4335-9b89-d863017e31d5" containerName="keystone-bootstrap" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.979313 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="913d576a-c66e-4335-9b89-d863017e31d5" containerName="keystone-bootstrap" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.979490 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="913d576a-c66e-4335-9b89-d863017e31d5" containerName="keystone-bootstrap" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.980115 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-pws6r" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.981417 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.981812 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.982040 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-chrfw" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.982934 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Dec 27 06:06:10 crc kubenswrapper[4760]: I1227 06:06:10.983805 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:10.987474 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-pws6r"] Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.130166 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz5c5\" (UniqueName: \"kubernetes.io/projected/9932da50-7e9f-48e5-a9a0-236cc084abb7-kube-api-access-hz5c5\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r" Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.130231 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-combined-ca-bundle\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r" Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.130308 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-scripts\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r" Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.130364 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-config-data\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r" Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.130417 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-fernet-keys\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r" Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.130508 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-credential-keys\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r" Dec 27 06:06:11 
Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.231364 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-credential-keys\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r"
Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.231467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-combined-ca-bundle\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r"
Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.231493 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz5c5\" (UniqueName: \"kubernetes.io/projected/9932da50-7e9f-48e5-a9a0-236cc084abb7-kube-api-access-hz5c5\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r"
Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.231517 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-scripts\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r"
Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.231543 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-config-data\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r"
Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.231576 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-fernet-keys\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r"
Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.236883 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-fernet-keys\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r"
Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.237405 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-credential-keys\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r"
Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.237749 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-combined-ca-bundle\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r"
Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.238914 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-scripts\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r"
Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.241880 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-config-data\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r"
Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.270446 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz5c5\" (UniqueName: \"kubernetes.io/projected/9932da50-7e9f-48e5-a9a0-236cc084abb7-kube-api-access-hz5c5\") pod \"keystone-bootstrap-pws6r\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " pod="nova-kuttl-default/keystone-bootstrap-pws6r"
Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.353185 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-pws6r"
Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.521726 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="913d576a-c66e-4335-9b89-d863017e31d5" path="/var/lib/kubelet/pods/913d576a-c66e-4335-9b89-d863017e31d5/volumes"
Dec 27 06:06:11 crc kubenswrapper[4760]: I1227 06:06:11.856604 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-pws6r"]
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.086135 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-gjkm5"
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.269877 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-scripts\") pod \"c288547a-4460-45d1-aad0-c311b34e2a6c\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") "
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.269984 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c288547a-4460-45d1-aad0-c311b34e2a6c-logs\") pod \"c288547a-4460-45d1-aad0-c311b34e2a6c\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") "
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.270115 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx4hz\" (UniqueName: \"kubernetes.io/projected/c288547a-4460-45d1-aad0-c311b34e2a6c-kube-api-access-qx4hz\") pod \"c288547a-4460-45d1-aad0-c311b34e2a6c\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") "
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.270147 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-config-data\") pod \"c288547a-4460-45d1-aad0-c311b34e2a6c\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") "
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.270179 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-combined-ca-bundle\") pod \"c288547a-4460-45d1-aad0-c311b34e2a6c\" (UID: \"c288547a-4460-45d1-aad0-c311b34e2a6c\") "
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.270375 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c288547a-4460-45d1-aad0-c311b34e2a6c-logs" (OuterVolumeSpecName: "logs") pod "c288547a-4460-45d1-aad0-c311b34e2a6c" (UID: "c288547a-4460-45d1-aad0-c311b34e2a6c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.270533 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c288547a-4460-45d1-aad0-c311b34e2a6c-logs\") on node \"crc\" DevicePath \"\""
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.274870 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c288547a-4460-45d1-aad0-c311b34e2a6c-kube-api-access-qx4hz" (OuterVolumeSpecName: "kube-api-access-qx4hz") pod "c288547a-4460-45d1-aad0-c311b34e2a6c" (UID: "c288547a-4460-45d1-aad0-c311b34e2a6c"). InnerVolumeSpecName "kube-api-access-qx4hz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.275186 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-scripts" (OuterVolumeSpecName: "scripts") pod "c288547a-4460-45d1-aad0-c311b34e2a6c" (UID: "c288547a-4460-45d1-aad0-c311b34e2a6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.289347 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c288547a-4460-45d1-aad0-c311b34e2a6c" (UID: "c288547a-4460-45d1-aad0-c311b34e2a6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.291365 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-config-data" (OuterVolumeSpecName: "config-data") pod "c288547a-4460-45d1-aad0-c311b34e2a6c" (UID: "c288547a-4460-45d1-aad0-c311b34e2a6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.372664 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx4hz\" (UniqueName: \"kubernetes.io/projected/c288547a-4460-45d1-aad0-c311b34e2a6c-kube-api-access-qx4hz\") on node \"crc\" DevicePath \"\""
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.372773 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-config-data\") on node \"crc\" DevicePath \"\""
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.372799 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.372809 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c288547a-4460-45d1-aad0-c311b34e2a6c-scripts\") on node \"crc\" DevicePath \"\""
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.823752 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-pws6r" event={"ID":"9932da50-7e9f-48e5-a9a0-236cc084abb7","Type":"ContainerStarted","Data":"053d5e087d7b78a168164cb8be104e1ffb9a573d1f8eb0d4242386bfd5680e9f"}
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.823816 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-pws6r" event={"ID":"9932da50-7e9f-48e5-a9a0-236cc084abb7","Type":"ContainerStarted","Data":"c065252845f582408da240a54f3a2440dc6793a0ae073019baecf3de247095c2"}
Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.825918 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-gjkm5" event={"ID":"c288547a-4460-45d1-aad0-c311b34e2a6c","Type":"ContainerDied","Data":"bc201d4fc2f0164ace7ccf64202b033d24806c11bd3804ffb61d18e5a9994427"}
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-gjkm5" Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.825980 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc201d4fc2f0164ace7ccf64202b033d24806c11bd3804ffb61d18e5a9994427" Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.850789 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-pws6r" podStartSLOduration=2.850763766 podStartE2EDuration="2.850763766s" podCreationTimestamp="2025-12-27 06:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 06:06:12.84480493 +0000 UTC m=+1295.604874265" watchObservedRunningTime="2025-12-27 06:06:12.850763766 +0000 UTC m=+1295.610833121" Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.925977 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-84867d49bd-zw5cr"] Dec 27 06:06:12 crc kubenswrapper[4760]: E1227 06:06:12.926454 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c288547a-4460-45d1-aad0-c311b34e2a6c" containerName="placement-db-sync" Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.926484 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c288547a-4460-45d1-aad0-c311b34e2a6c" containerName="placement-db-sync" Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.926766 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c288547a-4460-45d1-aad0-c311b34e2a6c" containerName="placement-db-sync" Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.928206 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.930902 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.931240 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-gwlcl" Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.933481 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Dec 27 06:06:12 crc kubenswrapper[4760]: I1227 06:06:12.949334 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-84867d49bd-zw5cr"] Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.083163 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kbmb\" (UniqueName: \"kubernetes.io/projected/14954fd5-cd52-4027-a904-27bfb69d6c6d-kube-api-access-6kbmb\") pod \"placement-84867d49bd-zw5cr\" (UID: \"14954fd5-cd52-4027-a904-27bfb69d6c6d\") " pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.083214 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14954fd5-cd52-4027-a904-27bfb69d6c6d-logs\") pod \"placement-84867d49bd-zw5cr\" (UID: \"14954fd5-cd52-4027-a904-27bfb69d6c6d\") " pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.083305 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/14954fd5-cd52-4027-a904-27bfb69d6c6d-scripts\") pod \"placement-84867d49bd-zw5cr\" (UID: \"14954fd5-cd52-4027-a904-27bfb69d6c6d\") " pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.083335 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14954fd5-cd52-4027-a904-27bfb69d6c6d-combined-ca-bundle\") pod \"placement-84867d49bd-zw5cr\" (UID: \"14954fd5-cd52-4027-a904-27bfb69d6c6d\") " pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.083358 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14954fd5-cd52-4027-a904-27bfb69d6c6d-config-data\") pod \"placement-84867d49bd-zw5cr\" (UID: \"14954fd5-cd52-4027-a904-27bfb69d6c6d\") " pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.187956 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kbmb\" (UniqueName: \"kubernetes.io/projected/14954fd5-cd52-4027-a904-27bfb69d6c6d-kube-api-access-6kbmb\") pod \"placement-84867d49bd-zw5cr\" (UID: \"14954fd5-cd52-4027-a904-27bfb69d6c6d\") " pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.188035 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14954fd5-cd52-4027-a904-27bfb69d6c6d-logs\") pod \"placement-84867d49bd-zw5cr\" (UID: \"14954fd5-cd52-4027-a904-27bfb69d6c6d\") " pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.188194 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14954fd5-cd52-4027-a904-27bfb69d6c6d-scripts\") pod \"placement-84867d49bd-zw5cr\" (UID: \"14954fd5-cd52-4027-a904-27bfb69d6c6d\") " pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.188253 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14954fd5-cd52-4027-a904-27bfb69d6c6d-combined-ca-bundle\") pod \"placement-84867d49bd-zw5cr\" (UID: \"14954fd5-cd52-4027-a904-27bfb69d6c6d\") " pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.188308 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14954fd5-cd52-4027-a904-27bfb69d6c6d-config-data\") pod \"placement-84867d49bd-zw5cr\" (UID: \"14954fd5-cd52-4027-a904-27bfb69d6c6d\") " pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.189690 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14954fd5-cd52-4027-a904-27bfb69d6c6d-logs\") pod \"placement-84867d49bd-zw5cr\" (UID: \"14954fd5-cd52-4027-a904-27bfb69d6c6d\") " pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.204159 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/14954fd5-cd52-4027-a904-27bfb69d6c6d-scripts\") pod \"placement-84867d49bd-zw5cr\" (UID: \"14954fd5-cd52-4027-a904-27bfb69d6c6d\") " pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.211278 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14954fd5-cd52-4027-a904-27bfb69d6c6d-config-data\") pod \"placement-84867d49bd-zw5cr\" (UID: \"14954fd5-cd52-4027-a904-27bfb69d6c6d\") " pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.215174 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kbmb\" (UniqueName: \"kubernetes.io/projected/14954fd5-cd52-4027-a904-27bfb69d6c6d-kube-api-access-6kbmb\") pod \"placement-84867d49bd-zw5cr\" (UID: \"14954fd5-cd52-4027-a904-27bfb69d6c6d\") " pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.215449 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14954fd5-cd52-4027-a904-27bfb69d6c6d-combined-ca-bundle\") pod \"placement-84867d49bd-zw5cr\" (UID: \"14954fd5-cd52-4027-a904-27bfb69d6c6d\") " pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.251189 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.736958 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-84867d49bd-zw5cr"] Dec 27 06:06:13 crc kubenswrapper[4760]: W1227 06:06:13.738924 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14954fd5_cd52_4027_a904_27bfb69d6c6d.slice/crio-9a9426e4637d6b2e573b247211a983f8caa85ce2f46a68dd5520d2b731328731 WatchSource:0}: Error finding container 9a9426e4637d6b2e573b247211a983f8caa85ce2f46a68dd5520d2b731328731: Status 404 returned error can't find the container with id 9a9426e4637d6b2e573b247211a983f8caa85ce2f46a68dd5520d2b731328731 Dec 27 06:06:13 crc kubenswrapper[4760]: I1227 06:06:13.834432 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-84867d49bd-zw5cr" event={"ID":"14954fd5-cd52-4027-a904-27bfb69d6c6d","Type":"ContainerStarted","Data":"9a9426e4637d6b2e573b247211a983f8caa85ce2f46a68dd5520d2b731328731"} Dec 27 06:06:14 crc kubenswrapper[4760]: I1227 06:06:14.847452 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-84867d49bd-zw5cr" event={"ID":"14954fd5-cd52-4027-a904-27bfb69d6c6d","Type":"ContainerStarted","Data":"695195a96b19dad005b1f5620f4c43cd2198762f0ce70a698560d5521010132a"} Dec 27 06:06:14 crc kubenswrapper[4760]: I1227 06:06:14.847776 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-84867d49bd-zw5cr" event={"ID":"14954fd5-cd52-4027-a904-27bfb69d6c6d","Type":"ContainerStarted","Data":"469fa5a49d56371f824c754977acf5f045583b1bd2e73dbc212e0d4a4fd28c42"} Dec 27 06:06:14 crc kubenswrapper[4760]: I1227 06:06:14.849142 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:14 crc kubenswrapper[4760]: I1227 06:06:14.849222 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:14 crc kubenswrapper[4760]: I1227 06:06:14.869150 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-84867d49bd-zw5cr" podStartSLOduration=2.86913463 podStartE2EDuration="2.86913463s" podCreationTimestamp="2025-12-27 06:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 06:06:14.865585763 +0000 UTC m=+1297.625655098" watchObservedRunningTime="2025-12-27 06:06:14.86913463 +0000 UTC m=+1297.629203945" Dec 27 06:06:15 crc kubenswrapper[4760]: I1227 06:06:15.861799 4760 generic.go:334] "Generic (PLEG): container finished" podID="9932da50-7e9f-48e5-a9a0-236cc084abb7" containerID="053d5e087d7b78a168164cb8be104e1ffb9a573d1f8eb0d4242386bfd5680e9f" exitCode=0 Dec 27 06:06:15 crc kubenswrapper[4760]: I1227 06:06:15.861882 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-pws6r" event={"ID":"9932da50-7e9f-48e5-a9a0-236cc084abb7","Type":"ContainerDied","Data":"053d5e087d7b78a168164cb8be104e1ffb9a573d1f8eb0d4242386bfd5680e9f"} Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.246352 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-pws6r" Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.366859 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-config-data\") pod \"9932da50-7e9f-48e5-a9a0-236cc084abb7\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.366976 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-fernet-keys\") pod \"9932da50-7e9f-48e5-a9a0-236cc084abb7\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.367025 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-combined-ca-bundle\") pod \"9932da50-7e9f-48e5-a9a0-236cc084abb7\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.367105 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz5c5\" (UniqueName: \"kubernetes.io/projected/9932da50-7e9f-48e5-a9a0-236cc084abb7-kube-api-access-hz5c5\") pod \"9932da50-7e9f-48e5-a9a0-236cc084abb7\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.367136 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-credential-keys\") pod \"9932da50-7e9f-48e5-a9a0-236cc084abb7\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.367224 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-scripts\") pod \"9932da50-7e9f-48e5-a9a0-236cc084abb7\" (UID: \"9932da50-7e9f-48e5-a9a0-236cc084abb7\") " Dec 27 06:06:17 crc kubenswrapper[4760]: 
Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.372445 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-scripts" (OuterVolumeSpecName: "scripts") pod "9932da50-7e9f-48e5-a9a0-236cc084abb7" (UID: "9932da50-7e9f-48e5-a9a0-236cc084abb7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.372873 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9932da50-7e9f-48e5-a9a0-236cc084abb7" (UID: "9932da50-7e9f-48e5-a9a0-236cc084abb7"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.373123 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9932da50-7e9f-48e5-a9a0-236cc084abb7" (UID: "9932da50-7e9f-48e5-a9a0-236cc084abb7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.379348 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9932da50-7e9f-48e5-a9a0-236cc084abb7-kube-api-access-hz5c5" (OuterVolumeSpecName: "kube-api-access-hz5c5") pod "9932da50-7e9f-48e5-a9a0-236cc084abb7" (UID: "9932da50-7e9f-48e5-a9a0-236cc084abb7"). InnerVolumeSpecName "kube-api-access-hz5c5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.389564 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-config-data" (OuterVolumeSpecName: "config-data") pod "9932da50-7e9f-48e5-a9a0-236cc084abb7" (UID: "9932da50-7e9f-48e5-a9a0-236cc084abb7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.392467 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9932da50-7e9f-48e5-a9a0-236cc084abb7" (UID: "9932da50-7e9f-48e5-a9a0-236cc084abb7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.469701 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-config-data\") on node \"crc\" DevicePath \"\""
Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.469753 4760 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.469776 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.469795 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz5c5\" (UniqueName: \"kubernetes.io/projected/9932da50-7e9f-48e5-a9a0-236cc084abb7-kube-api-access-hz5c5\") on node \"crc\" DevicePath \"\""
Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.469813 4760 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.469830 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9932da50-7e9f-48e5-a9a0-236cc084abb7-scripts\") on node \"crc\" DevicePath \"\""
Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.882138 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-pws6r" event={"ID":"9932da50-7e9f-48e5-a9a0-236cc084abb7","Type":"ContainerDied","Data":"c065252845f582408da240a54f3a2440dc6793a0ae073019baecf3de247095c2"}
Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.882510 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c065252845f582408da240a54f3a2440dc6793a0ae073019baecf3de247095c2"
Dec 27 06:06:17 crc kubenswrapper[4760]: I1227 06:06:17.882305 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-pws6r"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.000635 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-7d4784c744-hzvm6"]
Dec 27 06:06:18 crc kubenswrapper[4760]: E1227 06:06:18.001653 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9932da50-7e9f-48e5-a9a0-236cc084abb7" containerName="keystone-bootstrap"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.001790 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9932da50-7e9f-48e5-a9a0-236cc084abb7" containerName="keystone-bootstrap"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.002054 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9932da50-7e9f-48e5-a9a0-236cc084abb7" containerName="keystone-bootstrap"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.002741 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.005463 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-chrfw"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.006052 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.006418 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.006638 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.020006 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-7d4784c744-hzvm6"]
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.178729 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61704514-50fd-411e-8fb5-93bcf85fc4df-combined-ca-bundle\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.178779 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61704514-50fd-411e-8fb5-93bcf85fc4df-fernet-keys\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.179329 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5ptr\" (UniqueName: \"kubernetes.io/projected/61704514-50fd-411e-8fb5-93bcf85fc4df-kube-api-access-n5ptr\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.179605 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61704514-50fd-411e-8fb5-93bcf85fc4df-scripts\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.179744 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/61704514-50fd-411e-8fb5-93bcf85fc4df-credential-keys\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.179911 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61704514-50fd-411e-8fb5-93bcf85fc4df-config-data\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.281203 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61704514-50fd-411e-8fb5-93bcf85fc4df-config-data\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.282214 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61704514-50fd-411e-8fb5-93bcf85fc4df-combined-ca-bundle\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.282342 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61704514-50fd-411e-8fb5-93bcf85fc4df-fernet-keys\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.282450 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5ptr\" (UniqueName: \"kubernetes.io/projected/61704514-50fd-411e-8fb5-93bcf85fc4df-kube-api-access-n5ptr\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.282599 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61704514-50fd-411e-8fb5-93bcf85fc4df-scripts\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.282702 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/61704514-50fd-411e-8fb5-93bcf85fc4df-credential-keys\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.287863 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/61704514-50fd-411e-8fb5-93bcf85fc4df-credential-keys\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.287947 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61704514-50fd-411e-8fb5-93bcf85fc4df-combined-ca-bundle\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.288052 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61704514-50fd-411e-8fb5-93bcf85fc4df-fernet-keys\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.289282 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61704514-50fd-411e-8fb5-93bcf85fc4df-scripts\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.291538 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61704514-50fd-411e-8fb5-93bcf85fc4df-config-data\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.321220 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5ptr\" (UniqueName: \"kubernetes.io/projected/61704514-50fd-411e-8fb5-93bcf85fc4df-kube-api-access-n5ptr\") pod \"keystone-7d4784c744-hzvm6\" (UID: \"61704514-50fd-411e-8fb5-93bcf85fc4df\") " pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.349494 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:18 crc kubenswrapper[4760]: W1227 06:06:18.865818 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61704514_50fd_411e_8fb5_93bcf85fc4df.slice/crio-fa95627a0b0d564243e54e24658b94cfbeed1c140ea8231d88d719befaa72b1f WatchSource:0}: Error finding container fa95627a0b0d564243e54e24658b94cfbeed1c140ea8231d88d719befaa72b1f: Status 404 returned error can't find the container with id fa95627a0b0d564243e54e24658b94cfbeed1c140ea8231d88d719befaa72b1f
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.872477 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-7d4784c744-hzvm6"]
Dec 27 06:06:18 crc kubenswrapper[4760]: I1227 06:06:18.889732 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-7d4784c744-hzvm6" event={"ID":"61704514-50fd-411e-8fb5-93bcf85fc4df","Type":"ContainerStarted","Data":"fa95627a0b0d564243e54e24658b94cfbeed1c140ea8231d88d719befaa72b1f"}
Dec 27 06:06:19 crc kubenswrapper[4760]: I1227 06:06:19.897617 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-7d4784c744-hzvm6" event={"ID":"61704514-50fd-411e-8fb5-93bcf85fc4df","Type":"ContainerStarted","Data":"bbe9d55c1477b068d61544ba6835c14930cb2cb3117dbfdc0357db7573407b40"}
Dec 27 06:06:19 crc kubenswrapper[4760]: I1227 06:06:19.898250 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/keystone-7d4784c744-hzvm6"
Dec 27 06:06:19 crc kubenswrapper[4760]: I1227 06:06:19.915807 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-7d4784c744-hzvm6" podStartSLOduration=2.915781719 podStartE2EDuration="2.915781719s" podCreationTimestamp="2025-12-27 06:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 06:06:19.911931815 +0000 UTC m=+1302.672001140" watchObservedRunningTime="2025-12-27 06:06:19.915781719 +0000 UTC m=+1302.675851034"
Dec 27 06:06:45 crc kubenswrapper[4760]: I1227 06:06:45.074696 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-84867d49bd-zw5cr"
pod="nova-kuttl-default/placement-84867d49bd-zw5cr" Dec 27 06:06:49 crc kubenswrapper[4760]: I1227 06:06:49.796197 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/keystone-7d4784c744-hzvm6" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.001043 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstackclient"] Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.003531 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.009631 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.009752 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstackclient-openstackclient-dockercfg-7p7bp" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.009763 4760 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-config-secret" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.022858 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.151773 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9bhm\" (UniqueName: \"kubernetes.io/projected/6572eab4-812e-4fa8-90fc-112c29b11faf-kube-api-access-m9bhm\") pod \"openstackclient\" (UID: \"6572eab4-812e-4fa8-90fc-112c29b11faf\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.151867 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6572eab4-812e-4fa8-90fc-112c29b11faf-openstack-config-secret\") pod \"openstackclient\" (UID: \"6572eab4-812e-4fa8-90fc-112c29b11faf\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.151923 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6572eab4-812e-4fa8-90fc-112c29b11faf-openstack-config\") pod \"openstackclient\" (UID: \"6572eab4-812e-4fa8-90fc-112c29b11faf\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.151968 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6572eab4-812e-4fa8-90fc-112c29b11faf-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6572eab4-812e-4fa8-90fc-112c29b11faf\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.212312 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/openstackclient"] Dec 27 06:06:53 crc kubenswrapper[4760]: E1227 06:06:53.212850 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-m9bhm openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="nova-kuttl-default/openstackclient" podUID="6572eab4-812e-4fa8-90fc-112c29b11faf" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.222788 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["nova-kuttl-default/openstackclient"] Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.256490 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9bhm\" (UniqueName: \"kubernetes.io/projected/6572eab4-812e-4fa8-90fc-112c29b11faf-kube-api-access-m9bhm\") pod \"openstackclient\" (UID: \"6572eab4-812e-4fa8-90fc-112c29b11faf\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.256577 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6572eab4-812e-4fa8-90fc-112c29b11faf-openstack-config-secret\") pod \"openstackclient\" (UID: \"6572eab4-812e-4fa8-90fc-112c29b11faf\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.256617 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6572eab4-812e-4fa8-90fc-112c29b11faf-openstack-config\") pod \"openstackclient\" (UID: \"6572eab4-812e-4fa8-90fc-112c29b11faf\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.256660 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6572eab4-812e-4fa8-90fc-112c29b11faf-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6572eab4-812e-4fa8-90fc-112c29b11faf\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.257668 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6572eab4-812e-4fa8-90fc-112c29b11faf-openstack-config\") pod \"openstackclient\" (UID: \"6572eab4-812e-4fa8-90fc-112c29b11faf\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: E1227 06:06:53.258330 4760 projected.go:194] Error preparing data for projected volume kube-api-access-m9bhm for pod nova-kuttl-default/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (6572eab4-812e-4fa8-90fc-112c29b11faf) does not match the UID in record. The object might have been deleted and then recreated Dec 27 06:06:53 crc kubenswrapper[4760]: E1227 06:06:53.258389 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6572eab4-812e-4fa8-90fc-112c29b11faf-kube-api-access-m9bhm podName:6572eab4-812e-4fa8-90fc-112c29b11faf nodeName:}" failed. No retries permitted until 2025-12-27 06:06:53.758369952 +0000 UTC m=+1336.518439267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m9bhm" (UniqueName: "kubernetes.io/projected/6572eab4-812e-4fa8-90fc-112c29b11faf-kube-api-access-m9bhm") pod "openstackclient" (UID: "6572eab4-812e-4fa8-90fc-112c29b11faf") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (6572eab4-812e-4fa8-90fc-112c29b11faf) does not match the UID in record. 
The object might have been deleted and then recreated Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.261717 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstackclient"] Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.262031 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6572eab4-812e-4fa8-90fc-112c29b11faf-openstack-config-secret\") pod \"openstackclient\" (UID: \"6572eab4-812e-4fa8-90fc-112c29b11faf\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.262721 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.262759 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6572eab4-812e-4fa8-90fc-112c29b11faf-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6572eab4-812e-4fa8-90fc-112c29b11faf\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.280700 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.459878 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcvjg\" (UniqueName: \"kubernetes.io/projected/1d85013d-1d0a-4d2a-8322-7fc10a3745b7-kube-api-access-lcvjg\") pod \"openstackclient\" (UID: \"1d85013d-1d0a-4d2a-8322-7fc10a3745b7\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.460085 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d85013d-1d0a-4d2a-8322-7fc10a3745b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d85013d-1d0a-4d2a-8322-7fc10a3745b7\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.460143 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d85013d-1d0a-4d2a-8322-7fc10a3745b7-openstack-config\") pod \"openstackclient\" (UID: \"1d85013d-1d0a-4d2a-8322-7fc10a3745b7\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.460230 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d85013d-1d0a-4d2a-8322-7fc10a3745b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d85013d-1d0a-4d2a-8322-7fc10a3745b7\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.561552 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d85013d-1d0a-4d2a-8322-7fc10a3745b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d85013d-1d0a-4d2a-8322-7fc10a3745b7\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.561944 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d85013d-1d0a-4d2a-8322-7fc10a3745b7-openstack-config\") pod 
\"openstackclient\" (UID: \"1d85013d-1d0a-4d2a-8322-7fc10a3745b7\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.561980 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d85013d-1d0a-4d2a-8322-7fc10a3745b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d85013d-1d0a-4d2a-8322-7fc10a3745b7\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.562059 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcvjg\" (UniqueName: \"kubernetes.io/projected/1d85013d-1d0a-4d2a-8322-7fc10a3745b7-kube-api-access-lcvjg\") pod \"openstackclient\" (UID: \"1d85013d-1d0a-4d2a-8322-7fc10a3745b7\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.565606 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d85013d-1d0a-4d2a-8322-7fc10a3745b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d85013d-1d0a-4d2a-8322-7fc10a3745b7\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.566169 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d85013d-1d0a-4d2a-8322-7fc10a3745b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d85013d-1d0a-4d2a-8322-7fc10a3745b7\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.567860 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d85013d-1d0a-4d2a-8322-7fc10a3745b7-openstack-config\") pod \"openstackclient\" (UID: \"1d85013d-1d0a-4d2a-8322-7fc10a3745b7\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.578582 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcvjg\" (UniqueName: \"kubernetes.io/projected/1d85013d-1d0a-4d2a-8322-7fc10a3745b7-kube-api-access-lcvjg\") pod \"openstackclient\" (UID: \"1d85013d-1d0a-4d2a-8322-7fc10a3745b7\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.626500 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: I1227 06:06:53.766592 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9bhm\" (UniqueName: \"kubernetes.io/projected/6572eab4-812e-4fa8-90fc-112c29b11faf-kube-api-access-m9bhm\") pod \"openstackclient\" (UID: \"6572eab4-812e-4fa8-90fc-112c29b11faf\") " pod="nova-kuttl-default/openstackclient" Dec 27 06:06:53 crc kubenswrapper[4760]: E1227 06:06:53.769924 4760 projected.go:194] Error preparing data for projected volume kube-api-access-m9bhm for pod nova-kuttl-default/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (6572eab4-812e-4fa8-90fc-112c29b11faf) does not match the UID in record. 
The object might have been deleted and then recreated Dec 27 06:06:53 crc kubenswrapper[4760]: E1227 06:06:53.769998 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6572eab4-812e-4fa8-90fc-112c29b11faf-kube-api-access-m9bhm podName:6572eab4-812e-4fa8-90fc-112c29b11faf nodeName:}" failed. No retries permitted until 2025-12-27 06:06:54.769978244 +0000 UTC m=+1337.530047559 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-m9bhm" (UniqueName: "kubernetes.io/projected/6572eab4-812e-4fa8-90fc-112c29b11faf-kube-api-access-m9bhm") pod "openstackclient" (UID: "6572eab4-812e-4fa8-90fc-112c29b11faf") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (6572eab4-812e-4fa8-90fc-112c29b11faf) does not match the UID in record. The object might have been deleted and then recreated Dec 27 06:06:54 crc kubenswrapper[4760]: I1227 06:06:54.077871 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Dec 27 06:06:54 crc kubenswrapper[4760]: I1227 06:06:54.211129 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"1d85013d-1d0a-4d2a-8322-7fc10a3745b7","Type":"ContainerStarted","Data":"96f82d609a868a2e65c3b5c9f490d403e2b078f87a9a4415f3175f4c40c57f39"} Dec 27 06:06:54 crc kubenswrapper[4760]: I1227 06:06:54.211206 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstackclient" Dec 27 06:06:54 crc kubenswrapper[4760]: I1227 06:06:54.223279 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstackclient" Dec 27 06:06:54 crc kubenswrapper[4760]: I1227 06:06:54.226568 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="nova-kuttl-default/openstackclient" oldPodUID="6572eab4-812e-4fa8-90fc-112c29b11faf" podUID="1d85013d-1d0a-4d2a-8322-7fc10a3745b7" Dec 27 06:06:54 crc kubenswrapper[4760]: I1227 06:06:54.275337 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9bhm\" (UniqueName: \"kubernetes.io/projected/6572eab4-812e-4fa8-90fc-112c29b11faf-kube-api-access-m9bhm\") on node \"crc\" DevicePath \"\"" Dec 27 06:06:54 crc kubenswrapper[4760]: I1227 06:06:54.376630 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6572eab4-812e-4fa8-90fc-112c29b11faf-openstack-config-secret\") pod \"6572eab4-812e-4fa8-90fc-112c29b11faf\" (UID: \"6572eab4-812e-4fa8-90fc-112c29b11faf\") " Dec 27 06:06:54 crc kubenswrapper[4760]: I1227 06:06:54.376905 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6572eab4-812e-4fa8-90fc-112c29b11faf-combined-ca-bundle\") pod \"6572eab4-812e-4fa8-90fc-112c29b11faf\" (UID: \"6572eab4-812e-4fa8-90fc-112c29b11faf\") " Dec 27 06:06:54 crc kubenswrapper[4760]: I1227 06:06:54.377010 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6572eab4-812e-4fa8-90fc-112c29b11faf-openstack-config\") pod \"6572eab4-812e-4fa8-90fc-112c29b11faf\" (UID: \"6572eab4-812e-4fa8-90fc-112c29b11faf\") " Dec 27 06:06:54 crc kubenswrapper[4760]: I1227 06:06:54.377668 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6572eab4-812e-4fa8-90fc-112c29b11faf-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6572eab4-812e-4fa8-90fc-112c29b11faf" (UID: "6572eab4-812e-4fa8-90fc-112c29b11faf"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
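The MountVolume.SetUp failures above are the kubelet's bound service account token machinery at work: the token for a projected kube-api-access volume is requested with a BoundObjectRef pinning the pod's name and UID, so once the openstackclient pod is deleted and recreated under a new UID, token requests carrying the stale UID (6572eab4-...) are forbidden until the kubelet syncs the replacement pod object. A minimal client-go sketch of the same TokenRequest call follows; the namespace, service account, pod name, and UID are taken from the log, while the kubeconfig path and audience are illustrative assumptions.

package main

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; any admin kubeconfig for the cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	req := &authenticationv1.TokenRequest{
		Spec: authenticationv1.TokenRequestSpec{
			Audiences: []string{"https://kubernetes.default.svc"}, // assumed audience
			BoundObjectRef: &authenticationv1.BoundObjectReference{
				Kind:       "Pod",
				APIVersion: "v1",
				Name:       "openstackclient",
				// Stale UID from the log; the live pod is now 1d85013d-1d0a-4d2a-8322-7fc10a3745b7,
				// so the API server rejects this request exactly as in the entries above.
				UID: types.UID("6572eab4-812e-4fa8-90fc-112c29b11faf"),
			},
		},
	}
	tok, err := client.CoreV1().ServiceAccounts("nova-kuttl-default").
		CreateToken(context.TODO(), "openstackclient-openstackclient", req, metav1.CreateOptions{})
	if err != nil {
		// Expected here: serviceaccounts "openstackclient-openstackclient" is forbidden:
		// the UID in the bound object reference does not match the UID in record.
		fmt.Println("token request failed:", err)
		return
	}
	fmt.Println("token expires:", tok.Status.ExpirationTimestamp)
}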
"kubernetes.io/configmap/6572eab4-812e-4fa8-90fc-112c29b11faf-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6572eab4-812e-4fa8-90fc-112c29b11faf" (UID: "6572eab4-812e-4fa8-90fc-112c29b11faf"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 06:06:54 crc kubenswrapper[4760]: I1227 06:06:54.377824 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6572eab4-812e-4fa8-90fc-112c29b11faf-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 27 06:06:54 crc kubenswrapper[4760]: I1227 06:06:54.383204 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6572eab4-812e-4fa8-90fc-112c29b11faf-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6572eab4-812e-4fa8-90fc-112c29b11faf" (UID: "6572eab4-812e-4fa8-90fc-112c29b11faf"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 06:06:54 crc kubenswrapper[4760]: I1227 06:06:54.383255 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6572eab4-812e-4fa8-90fc-112c29b11faf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6572eab4-812e-4fa8-90fc-112c29b11faf" (UID: "6572eab4-812e-4fa8-90fc-112c29b11faf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 06:06:54 crc kubenswrapper[4760]: I1227 06:06:54.478727 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6572eab4-812e-4fa8-90fc-112c29b11faf-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 27 06:06:54 crc kubenswrapper[4760]: I1227 06:06:54.478766 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6572eab4-812e-4fa8-90fc-112c29b11faf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 06:06:55 crc kubenswrapper[4760]: I1227 06:06:55.216379 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Dec 27 06:06:55 crc kubenswrapper[4760]: I1227 06:06:55.231265 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="nova-kuttl-default/openstackclient" oldPodUID="6572eab4-812e-4fa8-90fc-112c29b11faf" podUID="1d85013d-1d0a-4d2a-8322-7fc10a3745b7" Dec 27 06:06:55 crc kubenswrapper[4760]: I1227 06:06:55.526539 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6572eab4-812e-4fa8-90fc-112c29b11faf" path="/var/lib/kubelet/pods/6572eab4-812e-4fa8-90fc-112c29b11faf/volumes" Dec 27 06:07:03 crc kubenswrapper[4760]: I1227 06:07:03.307960 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"1d85013d-1d0a-4d2a-8322-7fc10a3745b7","Type":"ContainerStarted","Data":"d5ef3eaf8f7a8aa92eafc830308217c46f8dfbe948b98683784830fbe80130c5"} Dec 27 06:07:03 crc kubenswrapper[4760]: I1227 06:07:03.329639 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstackclient" podStartSLOduration=1.3741084319999999 podStartE2EDuration="10.32961915s" podCreationTimestamp="2025-12-27 06:06:53 +0000 UTC" firstStartedPulling="2025-12-27 06:06:54.08722122 +0000 UTC m=+1336.847290535" lastFinishedPulling="2025-12-27 06:07:03.042731938 +0000 UTC m=+1345.802801253" observedRunningTime="2025-12-27 06:07:03.323658653 +0000 UTC m=+1346.083727978" watchObservedRunningTime="2025-12-27 06:07:03.32961915 +0000 UTC m=+1346.089688485" Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.205763 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs"] Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.207801 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs" podUID="5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96" containerName="manager" containerID="cri-o://ed5966ab3b8a08423f1798d85257e02a9eeb75992ee70c4520674277555e482b" gracePeriod=10 Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.374167 4760 generic.go:334] "Generic (PLEG): container finished" podID="5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96" containerID="ed5966ab3b8a08423f1798d85257e02a9eeb75992ee70c4520674277555e482b" exitCode=0 Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.374217 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs" event={"ID":"5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96","Type":"ContainerDied","Data":"ed5966ab3b8a08423f1798d85257e02a9eeb75992ee70c4520674277555e482b"} Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.518219 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-index-mlkx6"] Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.519208 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-mlkx6" Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.524343 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-index-dockercfg-xqcds" Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.546054 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-mlkx6"] Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.690605 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnztd\" (UniqueName: \"kubernetes.io/projected/fa38a6f6-8500-4036-8837-ef5ea4b78dba-kube-api-access-vnztd\") pod \"nova-operator-index-mlkx6\" (UID: \"fa38a6f6-8500-4036-8837-ef5ea4b78dba\") " pod="openstack-operators/nova-operator-index-mlkx6" Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.782937 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz"] Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.783153 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz" podUID="1e1dd6a2-f891-4b2f-9128-8935e8445bc0" containerName="operator" containerID="cri-o://7020c57cf6735267c558eefc400dd17ba2987e2d29d66b12f70326b205a50749" gracePeriod=10 Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.791921 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnztd\" (UniqueName: \"kubernetes.io/projected/fa38a6f6-8500-4036-8837-ef5ea4b78dba-kube-api-access-vnztd\") pod \"nova-operator-index-mlkx6\" (UID: \"fa38a6f6-8500-4036-8837-ef5ea4b78dba\") " pod="openstack-operators/nova-operator-index-mlkx6" Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.807004 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7fd66c86cd-9rkmn"] Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.807980 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-9rkmn" Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.813570 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7fd66c86cd-9rkmn"] Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.850954 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnztd\" (UniqueName: \"kubernetes.io/projected/fa38a6f6-8500-4036-8837-ef5ea4b78dba-kube-api-access-vnztd\") pod \"nova-operator-index-mlkx6\" (UID: \"fa38a6f6-8500-4036-8837-ef5ea4b78dba\") " pod="openstack-operators/nova-operator-index-mlkx6" Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.892371 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz" podUID="1e1dd6a2-f891-4b2f-9128-8935e8445bc0" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.69:8081/readyz\": dial tcp 10.217.0.69:8081: connect: connection refused" Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.893037 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dh64\" (UniqueName: \"kubernetes.io/projected/90731f81-cc83-40ff-af06-25f6ea776753-kube-api-access-9dh64\") pod \"nova-operator-controller-manager-7fd66c86cd-9rkmn\" (UID: \"90731f81-cc83-40ff-af06-25f6ea776753\") " pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-9rkmn" Dec 27 06:07:10 crc kubenswrapper[4760]: I1227 06:07:10.994001 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dh64\" (UniqueName: \"kubernetes.io/projected/90731f81-cc83-40ff-af06-25f6ea776753-kube-api-access-9dh64\") pod \"nova-operator-controller-manager-7fd66c86cd-9rkmn\" (UID: \"90731f81-cc83-40ff-af06-25f6ea776753\") " pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-9rkmn" Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.024003 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dh64\" (UniqueName: \"kubernetes.io/projected/90731f81-cc83-40ff-af06-25f6ea776753-kube-api-access-9dh64\") pod \"nova-operator-controller-manager-7fd66c86cd-9rkmn\" (UID: \"90731f81-cc83-40ff-af06-25f6ea776753\") " pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-9rkmn" Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.140452 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-mlkx6" Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.255195 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs" Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.274405 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-9rkmn" Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.411612 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q77p\" (UniqueName: \"kubernetes.io/projected/5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96-kube-api-access-2q77p\") pod \"5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96\" (UID: \"5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96\") " Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.420773 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96-kube-api-access-2q77p" (OuterVolumeSpecName: "kube-api-access-2q77p") pod "5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96" (UID: "5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96"). InnerVolumeSpecName "kube-api-access-2q77p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.442763 4760 generic.go:334] "Generic (PLEG): container finished" podID="1e1dd6a2-f891-4b2f-9128-8935e8445bc0" containerID="7020c57cf6735267c558eefc400dd17ba2987e2d29d66b12f70326b205a50749" exitCode=0 Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.442849 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz" event={"ID":"1e1dd6a2-f891-4b2f-9128-8935e8445bc0","Type":"ContainerDied","Data":"7020c57cf6735267c558eefc400dd17ba2987e2d29d66b12f70326b205a50749"} Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.442878 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz" event={"ID":"1e1dd6a2-f891-4b2f-9128-8935e8445bc0","Type":"ContainerDied","Data":"ecdd2345e46857ef9cc47094ebfde4825e2f6a1c58453a2db8171c95f3767e0a"} Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.442889 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecdd2345e46857ef9cc47094ebfde4825e2f6a1c58453a2db8171c95f3767e0a" Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.444750 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs" event={"ID":"5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96","Type":"ContainerDied","Data":"252bc2a7324305d7a588eef8e4d4f39aabd84cb3aa8a8fa20ecc03636a01ec31"} Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.444784 4760 scope.go:117] "RemoveContainer" containerID="ed5966ab3b8a08423f1798d85257e02a9eeb75992ee70c4520674277555e482b" Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.444898 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs" Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.462400 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz" Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.517691 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs"] Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.517722 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs"] Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.518993 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l68pf\" (UniqueName: \"kubernetes.io/projected/1e1dd6a2-f891-4b2f-9128-8935e8445bc0-kube-api-access-l68pf\") pod \"1e1dd6a2-f891-4b2f-9128-8935e8445bc0\" (UID: \"1e1dd6a2-f891-4b2f-9128-8935e8445bc0\") " Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.519203 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q77p\" (UniqueName: \"kubernetes.io/projected/5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96-kube-api-access-2q77p\") on node \"crc\" DevicePath \"\"" Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.525190 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e1dd6a2-f891-4b2f-9128-8935e8445bc0-kube-api-access-l68pf" (OuterVolumeSpecName: "kube-api-access-l68pf") pod "1e1dd6a2-f891-4b2f-9128-8935e8445bc0" (UID: "1e1dd6a2-f891-4b2f-9128-8935e8445bc0"). InnerVolumeSpecName "kube-api-access-l68pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.620986 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l68pf\" (UniqueName: \"kubernetes.io/projected/1e1dd6a2-f891-4b2f-9128-8935e8445bc0-kube-api-access-l68pf\") on node \"crc\" DevicePath \"\"" Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.821025 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-mlkx6"] Dec 27 06:07:11 crc kubenswrapper[4760]: W1227 06:07:11.827168 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa38a6f6_8500_4036_8837_ef5ea4b78dba.slice/crio-1f71b4f7172991196109de6fd5970e34ff0a98daec6d1e9fc623e9cb5e924916 WatchSource:0}: Error finding container 1f71b4f7172991196109de6fd5970e34ff0a98daec6d1e9fc623e9cb5e924916: Status 404 returned error can't find the container with id 1f71b4f7172991196109de6fd5970e34ff0a98daec6d1e9fc623e9cb5e924916 Dec 27 06:07:11 crc kubenswrapper[4760]: I1227 06:07:11.827682 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7fd66c86cd-9rkmn"] Dec 27 06:07:11 crc kubenswrapper[4760]: W1227 06:07:11.838850 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90731f81_cc83_40ff_af06_25f6ea776753.slice/crio-a9e8bf3ee9cf5b28f095a2240a4b5b4f243514a7b75f0b26e377be345fbbe01b WatchSource:0}: Error finding container a9e8bf3ee9cf5b28f095a2240a4b5b4f243514a7b75f0b26e377be345fbbe01b: Status 404 returned error can't find the container with id a9e8bf3ee9cf5b28f095a2240a4b5b4f243514a7b75f0b26e377be345fbbe01b Dec 27 06:07:12 crc kubenswrapper[4760]: I1227 06:07:12.251494 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-f46xs" 
podUID="5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 27 06:07:12 crc kubenswrapper[4760]: I1227 06:07:12.452968 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-mlkx6" event={"ID":"fa38a6f6-8500-4036-8837-ef5ea4b78dba","Type":"ContainerStarted","Data":"34ad41e67334667f58ced7149c5f148649d283997df5e3660fa63271cf218f5f"} Dec 27 06:07:12 crc kubenswrapper[4760]: I1227 06:07:12.454037 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-mlkx6" event={"ID":"fa38a6f6-8500-4036-8837-ef5ea4b78dba","Type":"ContainerStarted","Data":"1f71b4f7172991196109de6fd5970e34ff0a98daec6d1e9fc623e9cb5e924916"} Dec 27 06:07:12 crc kubenswrapper[4760]: I1227 06:07:12.454617 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-9rkmn" event={"ID":"90731f81-cc83-40ff-af06-25f6ea776753","Type":"ContainerStarted","Data":"3cf98180358ea9892d3839e0d52a55aca080521cceb1c2364db1c2db8b947116"} Dec 27 06:07:12 crc kubenswrapper[4760]: I1227 06:07:12.454673 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-9rkmn" event={"ID":"90731f81-cc83-40ff-af06-25f6ea776753","Type":"ContainerStarted","Data":"a9e8bf3ee9cf5b28f095a2240a4b5b4f243514a7b75f0b26e377be345fbbe01b"} Dec 27 06:07:12 crc kubenswrapper[4760]: I1227 06:07:12.455403 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-9rkmn" Dec 27 06:07:12 crc kubenswrapper[4760]: I1227 06:07:12.456710 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz" Dec 27 06:07:12 crc kubenswrapper[4760]: I1227 06:07:12.474253 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-mlkx6" podStartSLOduration=2.244418221 podStartE2EDuration="2.474230026s" podCreationTimestamp="2025-12-27 06:07:10 +0000 UTC" firstStartedPulling="2025-12-27 06:07:11.831383961 +0000 UTC m=+1354.591453276" lastFinishedPulling="2025-12-27 06:07:12.061195766 +0000 UTC m=+1354.821265081" observedRunningTime="2025-12-27 06:07:12.469653474 +0000 UTC m=+1355.229722789" watchObservedRunningTime="2025-12-27 06:07:12.474230026 +0000 UTC m=+1355.234299341" Dec 27 06:07:12 crc kubenswrapper[4760]: I1227 06:07:12.493207 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-9rkmn" podStartSLOduration=2.49319032 podStartE2EDuration="2.49319032s" podCreationTimestamp="2025-12-27 06:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 06:07:12.487441239 +0000 UTC m=+1355.247510554" watchObservedRunningTime="2025-12-27 06:07:12.49319032 +0000 UTC m=+1355.253259635" Dec 27 06:07:12 crc kubenswrapper[4760]: I1227 06:07:12.508474 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz"] Dec 27 06:07:12 crc kubenswrapper[4760]: I1227 06:07:12.513438 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7956f678b6-cxfvz"] Dec 27 06:07:13 crc kubenswrapper[4760]: I1227 06:07:13.273167 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-index-mlkx6"] Dec 27 06:07:13 crc kubenswrapper[4760]: I1227 06:07:13.514459 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e1dd6a2-f891-4b2f-9128-8935e8445bc0" path="/var/lib/kubelet/pods/1e1dd6a2-f891-4b2f-9128-8935e8445bc0/volumes" Dec 27 06:07:13 crc kubenswrapper[4760]: I1227 06:07:13.515954 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96" path="/var/lib/kubelet/pods/5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96/volumes" Dec 27 06:07:13 crc kubenswrapper[4760]: I1227 06:07:13.888046 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-index-jt5kk"] Dec 27 06:07:13 crc kubenswrapper[4760]: E1227 06:07:13.888600 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e1dd6a2-f891-4b2f-9128-8935e8445bc0" containerName="operator" Dec 27 06:07:13 crc kubenswrapper[4760]: I1227 06:07:13.888642 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e1dd6a2-f891-4b2f-9128-8935e8445bc0" containerName="operator" Dec 27 06:07:13 crc kubenswrapper[4760]: E1227 06:07:13.888700 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96" containerName="manager" Dec 27 06:07:13 crc kubenswrapper[4760]: I1227 06:07:13.888717 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96" containerName="manager" Dec 27 06:07:13 crc kubenswrapper[4760]: I1227 06:07:13.889035 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d46fdfd-85d1-4aa7-a8bc-0ec2c7eb6f96" containerName="manager" Dec 27 06:07:13 
crc kubenswrapper[4760]: I1227 06:07:13.889075 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e1dd6a2-f891-4b2f-9128-8935e8445bc0" containerName="operator" Dec 27 06:07:13 crc kubenswrapper[4760]: I1227 06:07:13.889958 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-jt5kk" Dec 27 06:07:13 crc kubenswrapper[4760]: I1227 06:07:13.925044 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-jt5kk"] Dec 27 06:07:13 crc kubenswrapper[4760]: I1227 06:07:13.954235 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh86h\" (UniqueName: \"kubernetes.io/projected/2ac71051-138e-44fa-a101-d00bd0811942-kube-api-access-rh86h\") pod \"nova-operator-index-jt5kk\" (UID: \"2ac71051-138e-44fa-a101-d00bd0811942\") " pod="openstack-operators/nova-operator-index-jt5kk" Dec 27 06:07:14 crc kubenswrapper[4760]: I1227 06:07:14.055859 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh86h\" (UniqueName: \"kubernetes.io/projected/2ac71051-138e-44fa-a101-d00bd0811942-kube-api-access-rh86h\") pod \"nova-operator-index-jt5kk\" (UID: \"2ac71051-138e-44fa-a101-d00bd0811942\") " pod="openstack-operators/nova-operator-index-jt5kk" Dec 27 06:07:14 crc kubenswrapper[4760]: I1227 06:07:14.081545 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh86h\" (UniqueName: \"kubernetes.io/projected/2ac71051-138e-44fa-a101-d00bd0811942-kube-api-access-rh86h\") pod \"nova-operator-index-jt5kk\" (UID: \"2ac71051-138e-44fa-a101-d00bd0811942\") " pod="openstack-operators/nova-operator-index-jt5kk" Dec 27 06:07:14 crc kubenswrapper[4760]: I1227 06:07:14.221234 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-jt5kk" Dec 27 06:07:14 crc kubenswrapper[4760]: I1227 06:07:14.471222 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-index-mlkx6" podUID="fa38a6f6-8500-4036-8837-ef5ea4b78dba" containerName="registry-server" containerID="cri-o://34ad41e67334667f58ced7149c5f148649d283997df5e3660fa63271cf218f5f" gracePeriod=2 Dec 27 06:07:14 crc kubenswrapper[4760]: I1227 06:07:14.763651 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-jt5kk"] Dec 27 06:07:15 crc kubenswrapper[4760]: I1227 06:07:15.483723 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-jt5kk" event={"ID":"2ac71051-138e-44fa-a101-d00bd0811942","Type":"ContainerStarted","Data":"7b38f21ddfaef844a081573a14d013fe0ade7350c95f0ce4a85ef9b0e51a53f3"} Dec 27 06:07:16 crc kubenswrapper[4760]: I1227 06:07:16.010753 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-mlkx6" Dec 27 06:07:16 crc kubenswrapper[4760]: I1227 06:07:16.093585 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnztd\" (UniqueName: \"kubernetes.io/projected/fa38a6f6-8500-4036-8837-ef5ea4b78dba-kube-api-access-vnztd\") pod \"fa38a6f6-8500-4036-8837-ef5ea4b78dba\" (UID: \"fa38a6f6-8500-4036-8837-ef5ea4b78dba\") " Dec 27 06:07:16 crc kubenswrapper[4760]: I1227 06:07:16.101409 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa38a6f6-8500-4036-8837-ef5ea4b78dba-kube-api-access-vnztd" (OuterVolumeSpecName: "kube-api-access-vnztd") pod "fa38a6f6-8500-4036-8837-ef5ea4b78dba" (UID: "fa38a6f6-8500-4036-8837-ef5ea4b78dba"). InnerVolumeSpecName "kube-api-access-vnztd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:07:16 crc kubenswrapper[4760]: I1227 06:07:16.196656 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnztd\" (UniqueName: \"kubernetes.io/projected/fa38a6f6-8500-4036-8837-ef5ea4b78dba-kube-api-access-vnztd\") on node \"crc\" DevicePath \"\"" Dec 27 06:07:16 crc kubenswrapper[4760]: I1227 06:07:16.494069 4760 generic.go:334] "Generic (PLEG): container finished" podID="fa38a6f6-8500-4036-8837-ef5ea4b78dba" containerID="34ad41e67334667f58ced7149c5f148649d283997df5e3660fa63271cf218f5f" exitCode=0 Dec 27 06:07:16 crc kubenswrapper[4760]: I1227 06:07:16.494168 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-mlkx6" event={"ID":"fa38a6f6-8500-4036-8837-ef5ea4b78dba","Type":"ContainerDied","Data":"34ad41e67334667f58ced7149c5f148649d283997df5e3660fa63271cf218f5f"} Dec 27 06:07:16 crc kubenswrapper[4760]: I1227 06:07:16.494230 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-mlkx6" event={"ID":"fa38a6f6-8500-4036-8837-ef5ea4b78dba","Type":"ContainerDied","Data":"1f71b4f7172991196109de6fd5970e34ff0a98daec6d1e9fc623e9cb5e924916"} Dec 27 06:07:16 crc kubenswrapper[4760]: I1227 06:07:16.494253 4760 scope.go:117] "RemoveContainer" containerID="34ad41e67334667f58ced7149c5f148649d283997df5e3660fa63271cf218f5f" Dec 27 06:07:16 crc kubenswrapper[4760]: I1227 06:07:16.494652 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-mlkx6" Dec 27 06:07:16 crc kubenswrapper[4760]: I1227 06:07:16.497327 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-jt5kk" event={"ID":"2ac71051-138e-44fa-a101-d00bd0811942","Type":"ContainerStarted","Data":"6c60c56b0fa32d1f2f3727db10989b185b0a0da5908acf1d05dccac07cc6f309"} Dec 27 06:07:16 crc kubenswrapper[4760]: I1227 06:07:16.520452 4760 scope.go:117] "RemoveContainer" containerID="34ad41e67334667f58ced7149c5f148649d283997df5e3660fa63271cf218f5f" Dec 27 06:07:16 crc kubenswrapper[4760]: E1227 06:07:16.521514 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34ad41e67334667f58ced7149c5f148649d283997df5e3660fa63271cf218f5f\": container with ID starting with 34ad41e67334667f58ced7149c5f148649d283997df5e3660fa63271cf218f5f not found: ID does not exist" containerID="34ad41e67334667f58ced7149c5f148649d283997df5e3660fa63271cf218f5f" Dec 27 06:07:16 crc kubenswrapper[4760]: I1227 06:07:16.521561 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ad41e67334667f58ced7149c5f148649d283997df5e3660fa63271cf218f5f"} err="failed to get container status \"34ad41e67334667f58ced7149c5f148649d283997df5e3660fa63271cf218f5f\": rpc error: code = NotFound desc = could not find container \"34ad41e67334667f58ced7149c5f148649d283997df5e3660fa63271cf218f5f\": container with ID starting with 34ad41e67334667f58ced7149c5f148649d283997df5e3660fa63271cf218f5f not found: ID does not exist" Dec 27 06:07:16 crc kubenswrapper[4760]: I1227 06:07:16.522014 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-jt5kk" podStartSLOduration=2.640258332 podStartE2EDuration="3.521995825s" podCreationTimestamp="2025-12-27 06:07:13 +0000 UTC" firstStartedPulling="2025-12-27 06:07:14.771936388 +0000 UTC m=+1357.532005703" lastFinishedPulling="2025-12-27 06:07:15.653673881 +0000 UTC m=+1358.413743196" observedRunningTime="2025-12-27 06:07:16.518888959 +0000 UTC m=+1359.278958294" watchObservedRunningTime="2025-12-27 06:07:16.521995825 +0000 UTC m=+1359.282065150" Dec 27 06:07:16 crc kubenswrapper[4760]: I1227 06:07:16.544931 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-index-mlkx6"] Dec 27 06:07:16 crc kubenswrapper[4760]: I1227 06:07:16.551217 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-index-mlkx6"] Dec 27 06:07:17 crc kubenswrapper[4760]: I1227 06:07:17.514766 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa38a6f6-8500-4036-8837-ef5ea4b78dba" path="/var/lib/kubelet/pods/fa38a6f6-8500-4036-8837-ef5ea4b78dba/volumes" Dec 27 06:07:21 crc kubenswrapper[4760]: I1227 06:07:21.277184 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7fd66c86cd-9rkmn" Dec 27 06:07:24 crc kubenswrapper[4760]: I1227 06:07:24.222582 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/nova-operator-index-jt5kk" Dec 27 06:07:24 crc kubenswrapper[4760]: I1227 06:07:24.223328 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-index-jt5kk" Dec 27 06:07:24 crc kubenswrapper[4760]: I1227 06:07:24.275733 4760 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack-operators/nova-operator-index-jt5kk" Dec 27 06:07:24 crc kubenswrapper[4760]: I1227 06:07:24.607239 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-index-jt5kk" Dec 27 06:07:27 crc kubenswrapper[4760]: I1227 06:07:27.334481 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx"] Dec 27 06:07:27 crc kubenswrapper[4760]: E1227 06:07:27.335194 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa38a6f6-8500-4036-8837-ef5ea4b78dba" containerName="registry-server" Dec 27 06:07:27 crc kubenswrapper[4760]: I1227 06:07:27.335215 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa38a6f6-8500-4036-8837-ef5ea4b78dba" containerName="registry-server" Dec 27 06:07:27 crc kubenswrapper[4760]: I1227 06:07:27.335504 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa38a6f6-8500-4036-8837-ef5ea4b78dba" containerName="registry-server" Dec 27 06:07:27 crc kubenswrapper[4760]: I1227 06:07:27.337018 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" Dec 27 06:07:27 crc kubenswrapper[4760]: I1227 06:07:27.339407 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xnnf4" Dec 27 06:07:27 crc kubenswrapper[4760]: I1227 06:07:27.354907 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx"] Dec 27 06:07:27 crc kubenswrapper[4760]: I1227 06:07:27.491149 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/556d72b2-b316-46cf-9094-370351d21aee-util\") pod \"d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx\" (UID: \"556d72b2-b316-46cf-9094-370351d21aee\") " pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" Dec 27 06:07:27 crc kubenswrapper[4760]: I1227 06:07:27.491383 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/556d72b2-b316-46cf-9094-370351d21aee-bundle\") pod \"d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx\" (UID: \"556d72b2-b316-46cf-9094-370351d21aee\") " pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" Dec 27 06:07:27 crc kubenswrapper[4760]: I1227 06:07:27.491458 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w726c\" (UniqueName: \"kubernetes.io/projected/556d72b2-b316-46cf-9094-370351d21aee-kube-api-access-w726c\") pod \"d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx\" (UID: \"556d72b2-b316-46cf-9094-370351d21aee\") " pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" Dec 27 06:07:27 crc kubenswrapper[4760]: I1227 06:07:27.593900 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/556d72b2-b316-46cf-9094-370351d21aee-bundle\") pod \"d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx\" (UID: \"556d72b2-b316-46cf-9094-370351d21aee\") " 
pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" Dec 27 06:07:27 crc kubenswrapper[4760]: I1227 06:07:27.594031 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w726c\" (UniqueName: \"kubernetes.io/projected/556d72b2-b316-46cf-9094-370351d21aee-kube-api-access-w726c\") pod \"d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx\" (UID: \"556d72b2-b316-46cf-9094-370351d21aee\") " pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" Dec 27 06:07:27 crc kubenswrapper[4760]: I1227 06:07:27.594081 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/556d72b2-b316-46cf-9094-370351d21aee-util\") pod \"d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx\" (UID: \"556d72b2-b316-46cf-9094-370351d21aee\") " pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" Dec 27 06:07:27 crc kubenswrapper[4760]: I1227 06:07:27.595025 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/556d72b2-b316-46cf-9094-370351d21aee-util\") pod \"d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx\" (UID: \"556d72b2-b316-46cf-9094-370351d21aee\") " pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" Dec 27 06:07:27 crc kubenswrapper[4760]: I1227 06:07:27.595043 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/556d72b2-b316-46cf-9094-370351d21aee-bundle\") pod \"d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx\" (UID: \"556d72b2-b316-46cf-9094-370351d21aee\") " pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" Dec 27 06:07:27 crc kubenswrapper[4760]: I1227 06:07:27.625252 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w726c\" (UniqueName: \"kubernetes.io/projected/556d72b2-b316-46cf-9094-370351d21aee-kube-api-access-w726c\") pod \"d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx\" (UID: \"556d72b2-b316-46cf-9094-370351d21aee\") " pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" Dec 27 06:07:27 crc kubenswrapper[4760]: I1227 06:07:27.656531 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" Dec 27 06:07:28 crc kubenswrapper[4760]: W1227 06:07:28.090353 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod556d72b2_b316_46cf_9094_370351d21aee.slice/crio-39cf43e3514890a261eb85f72c530a86470007c703636c4e2250fcb0cf294358 WatchSource:0}: Error finding container 39cf43e3514890a261eb85f72c530a86470007c703636c4e2250fcb0cf294358: Status 404 returned error can't find the container with id 39cf43e3514890a261eb85f72c530a86470007c703636c4e2250fcb0cf294358 Dec 27 06:07:28 crc kubenswrapper[4760]: I1227 06:07:28.090791 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx"] Dec 27 06:07:28 crc kubenswrapper[4760]: I1227 06:07:28.615171 4760 generic.go:334] "Generic (PLEG): container finished" podID="556d72b2-b316-46cf-9094-370351d21aee" containerID="e470574f4ab4bbe4827ce4ec9d5311cd11fd4f9544129b613b52d6e3e822a523" exitCode=0 Dec 27 06:07:28 crc kubenswrapper[4760]: I1227 06:07:28.615328 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" event={"ID":"556d72b2-b316-46cf-9094-370351d21aee","Type":"ContainerDied","Data":"e470574f4ab4bbe4827ce4ec9d5311cd11fd4f9544129b613b52d6e3e822a523"} Dec 27 06:07:28 crc kubenswrapper[4760]: I1227 06:07:28.615588 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" event={"ID":"556d72b2-b316-46cf-9094-370351d21aee","Type":"ContainerStarted","Data":"39cf43e3514890a261eb85f72c530a86470007c703636c4e2250fcb0cf294358"} Dec 27 06:07:32 crc kubenswrapper[4760]: I1227 06:07:32.657228 4760 generic.go:334] "Generic (PLEG): container finished" podID="556d72b2-b316-46cf-9094-370351d21aee" containerID="fbba7dbe8842e4d1f6e406b66f2fbc4d666fd1ef4cb6467e068e35262e59d843" exitCode=0 Dec 27 06:07:32 crc kubenswrapper[4760]: I1227 06:07:32.657346 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" event={"ID":"556d72b2-b316-46cf-9094-370351d21aee","Type":"ContainerDied","Data":"fbba7dbe8842e4d1f6e406b66f2fbc4d666fd1ef4cb6467e068e35262e59d843"} Dec 27 06:07:33 crc kubenswrapper[4760]: I1227 06:07:33.671219 4760 generic.go:334] "Generic (PLEG): container finished" podID="556d72b2-b316-46cf-9094-370351d21aee" containerID="de6d2abf370a2be662f42e4e04d95706e8a245b71b0b543cbb3eb2d0567e1e9c" exitCode=0 Dec 27 06:07:33 crc kubenswrapper[4760]: I1227 06:07:33.671423 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" event={"ID":"556d72b2-b316-46cf-9094-370351d21aee","Type":"ContainerDied","Data":"de6d2abf370a2be662f42e4e04d95706e8a245b71b0b543cbb3eb2d0567e1e9c"} Dec 27 06:07:35 crc kubenswrapper[4760]: I1227 06:07:35.095712 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" Dec 27 06:07:35 crc kubenswrapper[4760]: I1227 06:07:35.223714 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/556d72b2-b316-46cf-9094-370351d21aee-bundle\") pod \"556d72b2-b316-46cf-9094-370351d21aee\" (UID: \"556d72b2-b316-46cf-9094-370351d21aee\") " Dec 27 06:07:35 crc kubenswrapper[4760]: I1227 06:07:35.223998 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/556d72b2-b316-46cf-9094-370351d21aee-util\") pod \"556d72b2-b316-46cf-9094-370351d21aee\" (UID: \"556d72b2-b316-46cf-9094-370351d21aee\") " Dec 27 06:07:35 crc kubenswrapper[4760]: I1227 06:07:35.224189 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w726c\" (UniqueName: \"kubernetes.io/projected/556d72b2-b316-46cf-9094-370351d21aee-kube-api-access-w726c\") pod \"556d72b2-b316-46cf-9094-370351d21aee\" (UID: \"556d72b2-b316-46cf-9094-370351d21aee\") " Dec 27 06:07:35 crc kubenswrapper[4760]: I1227 06:07:35.226283 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/556d72b2-b316-46cf-9094-370351d21aee-bundle" (OuterVolumeSpecName: "bundle") pod "556d72b2-b316-46cf-9094-370351d21aee" (UID: "556d72b2-b316-46cf-9094-370351d21aee"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:07:35 crc kubenswrapper[4760]: I1227 06:07:35.229992 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/556d72b2-b316-46cf-9094-370351d21aee-kube-api-access-w726c" (OuterVolumeSpecName: "kube-api-access-w726c") pod "556d72b2-b316-46cf-9094-370351d21aee" (UID: "556d72b2-b316-46cf-9094-370351d21aee"). InnerVolumeSpecName "kube-api-access-w726c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:07:35 crc kubenswrapper[4760]: I1227 06:07:35.234502 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/556d72b2-b316-46cf-9094-370351d21aee-util" (OuterVolumeSpecName: "util") pod "556d72b2-b316-46cf-9094-370351d21aee" (UID: "556d72b2-b316-46cf-9094-370351d21aee"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:07:35 crc kubenswrapper[4760]: I1227 06:07:35.287815 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 06:07:35 crc kubenswrapper[4760]: I1227 06:07:35.287963 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 06:07:35 crc kubenswrapper[4760]: I1227 06:07:35.325590 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/556d72b2-b316-46cf-9094-370351d21aee-util\") on node \"crc\" DevicePath \"\"" Dec 27 06:07:35 crc kubenswrapper[4760]: I1227 06:07:35.325621 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w726c\" (UniqueName: \"kubernetes.io/projected/556d72b2-b316-46cf-9094-370351d21aee-kube-api-access-w726c\") on node \"crc\" DevicePath \"\"" Dec 27 06:07:35 crc kubenswrapper[4760]: I1227 06:07:35.325632 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/556d72b2-b316-46cf-9094-370351d21aee-bundle\") on node \"crc\" DevicePath \"\"" Dec 27 06:07:35 crc kubenswrapper[4760]: I1227 06:07:35.695113 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" event={"ID":"556d72b2-b316-46cf-9094-370351d21aee","Type":"ContainerDied","Data":"39cf43e3514890a261eb85f72c530a86470007c703636c4e2250fcb0cf294358"} Dec 27 06:07:35 crc kubenswrapper[4760]: I1227 06:07:35.695424 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39cf43e3514890a261eb85f72c530a86470007c703636c4e2250fcb0cf294358" Dec 27 06:07:35 crc kubenswrapper[4760]: I1227 06:07:35.695228 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx" Dec 27 06:08:05 crc kubenswrapper[4760]: I1227 06:08:05.288223 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 06:08:05 crc kubenswrapper[4760]: I1227 06:08:05.288891 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 06:08:35 crc kubenswrapper[4760]: I1227 06:08:35.287825 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 06:08:35 crc kubenswrapper[4760]: I1227 06:08:35.288327 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 06:08:35 crc kubenswrapper[4760]: I1227 06:08:35.288373 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 06:08:35 crc kubenswrapper[4760]: I1227 06:08:35.288998 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"560a955296849a499da3938ec04851b01ba39103dd145de7716581a5c91fbb44"} pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 27 06:08:35 crc kubenswrapper[4760]: I1227 06:08:35.289053 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" containerID="cri-o://560a955296849a499da3938ec04851b01ba39103dd145de7716581a5c91fbb44" gracePeriod=600 Dec 27 06:08:36 crc kubenswrapper[4760]: I1227 06:08:36.224158 4760 generic.go:334] "Generic (PLEG): container finished" podID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerID="560a955296849a499da3938ec04851b01ba39103dd145de7716581a5c91fbb44" exitCode=0 Dec 27 06:08:36 crc kubenswrapper[4760]: I1227 06:08:36.224347 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerDied","Data":"560a955296849a499da3938ec04851b01ba39103dd145de7716581a5c91fbb44"} Dec 27 06:08:36 crc kubenswrapper[4760]: I1227 06:08:36.224468 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" 
event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerStarted","Data":"25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717"} Dec 27 06:08:36 crc kubenswrapper[4760]: I1227 06:08:36.224485 4760 scope.go:117] "RemoveContainer" containerID="3c94525c7f3f2fe0472a9dc16fbdb91fa7f9227d81da63569e521eb20968b308" Dec 27 06:08:44 crc kubenswrapper[4760]: I1227 06:08:44.083192 4760 scope.go:117] "RemoveContainer" containerID="7020c57cf6735267c558eefc400dd17ba2987e2d29d66b12f70326b205a50749" Dec 27 06:08:48 crc kubenswrapper[4760]: I1227 06:08:48.940006 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g9zvv"] Dec 27 06:08:48 crc kubenswrapper[4760]: E1227 06:08:48.940734 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556d72b2-b316-46cf-9094-370351d21aee" containerName="util" Dec 27 06:08:48 crc kubenswrapper[4760]: I1227 06:08:48.940745 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="556d72b2-b316-46cf-9094-370351d21aee" containerName="util" Dec 27 06:08:48 crc kubenswrapper[4760]: E1227 06:08:48.940759 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556d72b2-b316-46cf-9094-370351d21aee" containerName="pull" Dec 27 06:08:48 crc kubenswrapper[4760]: I1227 06:08:48.940765 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="556d72b2-b316-46cf-9094-370351d21aee" containerName="pull" Dec 27 06:08:48 crc kubenswrapper[4760]: E1227 06:08:48.940773 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556d72b2-b316-46cf-9094-370351d21aee" containerName="extract" Dec 27 06:08:48 crc kubenswrapper[4760]: I1227 06:08:48.940778 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="556d72b2-b316-46cf-9094-370351d21aee" containerName="extract" Dec 27 06:08:48 crc kubenswrapper[4760]: I1227 06:08:48.940930 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="556d72b2-b316-46cf-9094-370351d21aee" containerName="extract" Dec 27 06:08:48 crc kubenswrapper[4760]: I1227 06:08:48.941922 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:08:48 crc kubenswrapper[4760]: I1227 06:08:48.961409 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g9zvv"] Dec 27 06:08:49 crc kubenswrapper[4760]: I1227 06:08:49.074856 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-utilities\") pod \"community-operators-g9zvv\" (UID: \"fccc5a8f-3e19-4459-86a8-c767e0a91a9d\") " pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:08:49 crc kubenswrapper[4760]: I1227 06:08:49.074912 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-catalog-content\") pod \"community-operators-g9zvv\" (UID: \"fccc5a8f-3e19-4459-86a8-c767e0a91a9d\") " pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:08:49 crc kubenswrapper[4760]: I1227 06:08:49.075156 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8t59\" (UniqueName: \"kubernetes.io/projected/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-kube-api-access-w8t59\") pod \"community-operators-g9zvv\" (UID: \"fccc5a8f-3e19-4459-86a8-c767e0a91a9d\") " pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:08:49 crc kubenswrapper[4760]: I1227 06:08:49.177200 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-catalog-content\") pod \"community-operators-g9zvv\" (UID: \"fccc5a8f-3e19-4459-86a8-c767e0a91a9d\") " pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:08:49 crc kubenswrapper[4760]: I1227 06:08:49.177354 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8t59\" (UniqueName: \"kubernetes.io/projected/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-kube-api-access-w8t59\") pod \"community-operators-g9zvv\" (UID: \"fccc5a8f-3e19-4459-86a8-c767e0a91a9d\") " pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:08:49 crc kubenswrapper[4760]: I1227 06:08:49.177492 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-utilities\") pod \"community-operators-g9zvv\" (UID: \"fccc5a8f-3e19-4459-86a8-c767e0a91a9d\") " pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:08:49 crc kubenswrapper[4760]: I1227 06:08:49.177935 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-catalog-content\") pod \"community-operators-g9zvv\" (UID: \"fccc5a8f-3e19-4459-86a8-c767e0a91a9d\") " pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:08:49 crc kubenswrapper[4760]: I1227 06:08:49.178002 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-utilities\") pod \"community-operators-g9zvv\" (UID: \"fccc5a8f-3e19-4459-86a8-c767e0a91a9d\") " pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:08:49 crc kubenswrapper[4760]: I1227 06:08:49.206596 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w8t59\" (UniqueName: \"kubernetes.io/projected/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-kube-api-access-w8t59\") pod \"community-operators-g9zvv\" (UID: \"fccc5a8f-3e19-4459-86a8-c767e0a91a9d\") " pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:08:49 crc kubenswrapper[4760]: I1227 06:08:49.273305 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:08:49 crc kubenswrapper[4760]: I1227 06:08:49.565850 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g9zvv"] Dec 27 06:08:49 crc kubenswrapper[4760]: W1227 06:08:49.579353 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfccc5a8f_3e19_4459_86a8_c767e0a91a9d.slice/crio-f3ad40abb531b8c63c8bd1421e28f192314609f4da3c854d13c4c4d8e0acf568 WatchSource:0}: Error finding container f3ad40abb531b8c63c8bd1421e28f192314609f4da3c854d13c4c4d8e0acf568: Status 404 returned error can't find the container with id f3ad40abb531b8c63c8bd1421e28f192314609f4da3c854d13c4c4d8e0acf568 Dec 27 06:08:50 crc kubenswrapper[4760]: I1227 06:08:50.348741 4760 generic.go:334] "Generic (PLEG): container finished" podID="fccc5a8f-3e19-4459-86a8-c767e0a91a9d" containerID="bb1e3975a22d0ee409f19d54547dac216475b3fadb17d81535f953d5c954ce9d" exitCode=0 Dec 27 06:08:50 crc kubenswrapper[4760]: I1227 06:08:50.348781 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9zvv" event={"ID":"fccc5a8f-3e19-4459-86a8-c767e0a91a9d","Type":"ContainerDied","Data":"bb1e3975a22d0ee409f19d54547dac216475b3fadb17d81535f953d5c954ce9d"} Dec 27 06:08:50 crc kubenswrapper[4760]: I1227 06:08:50.348805 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9zvv" event={"ID":"fccc5a8f-3e19-4459-86a8-c767e0a91a9d","Type":"ContainerStarted","Data":"f3ad40abb531b8c63c8bd1421e28f192314609f4da3c854d13c4c4d8e0acf568"} Dec 27 06:08:50 crc kubenswrapper[4760]: I1227 06:08:50.352921 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 27 06:08:51 crc kubenswrapper[4760]: I1227 06:08:51.384190 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9zvv" event={"ID":"fccc5a8f-3e19-4459-86a8-c767e0a91a9d","Type":"ContainerStarted","Data":"374f17c50435af68f94837d9af09168481544c4b01225884e562fa85013e44c8"} Dec 27 06:08:52 crc kubenswrapper[4760]: I1227 06:08:52.392720 4760 generic.go:334] "Generic (PLEG): container finished" podID="fccc5a8f-3e19-4459-86a8-c767e0a91a9d" containerID="374f17c50435af68f94837d9af09168481544c4b01225884e562fa85013e44c8" exitCode=0 Dec 27 06:08:52 crc kubenswrapper[4760]: I1227 06:08:52.392762 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9zvv" event={"ID":"fccc5a8f-3e19-4459-86a8-c767e0a91a9d","Type":"ContainerDied","Data":"374f17c50435af68f94837d9af09168481544c4b01225884e562fa85013e44c8"} Dec 27 06:08:53 crc kubenswrapper[4760]: I1227 06:08:53.403170 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9zvv" event={"ID":"fccc5a8f-3e19-4459-86a8-c767e0a91a9d","Type":"ContainerStarted","Data":"29f5d2a9a6bd43debbf7f4c30bb8c3f83d3a328313c7ca999d3bda9bf96fb897"} Dec 27 06:08:53 crc kubenswrapper[4760]: I1227 
06:08:53.432076 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g9zvv" podStartSLOduration=2.852093456 podStartE2EDuration="5.432048142s" podCreationTimestamp="2025-12-27 06:08:48 +0000 UTC" firstStartedPulling="2025-12-27 06:08:50.35260791 +0000 UTC m=+1453.112677225" lastFinishedPulling="2025-12-27 06:08:52.932562596 +0000 UTC m=+1455.692631911" observedRunningTime="2025-12-27 06:08:53.4215707 +0000 UTC m=+1456.181640015" watchObservedRunningTime="2025-12-27 06:08:53.432048142 +0000 UTC m=+1456.192117477" Dec 27 06:08:59 crc kubenswrapper[4760]: I1227 06:08:59.273565 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:08:59 crc kubenswrapper[4760]: I1227 06:08:59.276481 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:08:59 crc kubenswrapper[4760]: I1227 06:08:59.333460 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:08:59 crc kubenswrapper[4760]: I1227 06:08:59.499127 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:08:59 crc kubenswrapper[4760]: I1227 06:08:59.570496 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g9zvv"] Dec 27 06:09:01 crc kubenswrapper[4760]: I1227 06:09:01.458904 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g9zvv" podUID="fccc5a8f-3e19-4459-86a8-c767e0a91a9d" containerName="registry-server" containerID="cri-o://29f5d2a9a6bd43debbf7f4c30bb8c3f83d3a328313c7ca999d3bda9bf96fb897" gracePeriod=2 Dec 27 06:09:01 crc kubenswrapper[4760]: I1227 06:09:01.964113 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:09:01 crc kubenswrapper[4760]: I1227 06:09:01.968325 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-utilities\") pod \"fccc5a8f-3e19-4459-86a8-c767e0a91a9d\" (UID: \"fccc5a8f-3e19-4459-86a8-c767e0a91a9d\") " Dec 27 06:09:01 crc kubenswrapper[4760]: I1227 06:09:01.969199 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-utilities" (OuterVolumeSpecName: "utilities") pod "fccc5a8f-3e19-4459-86a8-c767e0a91a9d" (UID: "fccc5a8f-3e19-4459-86a8-c767e0a91a9d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.069438 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8t59\" (UniqueName: \"kubernetes.io/projected/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-kube-api-access-w8t59\") pod \"fccc5a8f-3e19-4459-86a8-c767e0a91a9d\" (UID: \"fccc5a8f-3e19-4459-86a8-c767e0a91a9d\") " Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.069538 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-catalog-content\") pod \"fccc5a8f-3e19-4459-86a8-c767e0a91a9d\" (UID: \"fccc5a8f-3e19-4459-86a8-c767e0a91a9d\") " Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.069819 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.075925 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-kube-api-access-w8t59" (OuterVolumeSpecName: "kube-api-access-w8t59") pod "fccc5a8f-3e19-4459-86a8-c767e0a91a9d" (UID: "fccc5a8f-3e19-4459-86a8-c767e0a91a9d"). InnerVolumeSpecName "kube-api-access-w8t59". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.136296 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fccc5a8f-3e19-4459-86a8-c767e0a91a9d" (UID: "fccc5a8f-3e19-4459-86a8-c767e0a91a9d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.171635 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8t59\" (UniqueName: \"kubernetes.io/projected/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-kube-api-access-w8t59\") on node \"crc\" DevicePath \"\"" Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.171667 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fccc5a8f-3e19-4459-86a8-c767e0a91a9d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.467670 4760 generic.go:334] "Generic (PLEG): container finished" podID="fccc5a8f-3e19-4459-86a8-c767e0a91a9d" containerID="29f5d2a9a6bd43debbf7f4c30bb8c3f83d3a328313c7ca999d3bda9bf96fb897" exitCode=0 Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.467728 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9zvv" event={"ID":"fccc5a8f-3e19-4459-86a8-c767e0a91a9d","Type":"ContainerDied","Data":"29f5d2a9a6bd43debbf7f4c30bb8c3f83d3a328313c7ca999d3bda9bf96fb897"} Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.467772 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9zvv" event={"ID":"fccc5a8f-3e19-4459-86a8-c767e0a91a9d","Type":"ContainerDied","Data":"f3ad40abb531b8c63c8bd1421e28f192314609f4da3c854d13c4c4d8e0acf568"} Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.467793 4760 scope.go:117] "RemoveContainer" containerID="29f5d2a9a6bd43debbf7f4c30bb8c3f83d3a328313c7ca999d3bda9bf96fb897" Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.467925 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g9zvv" Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.491404 4760 scope.go:117] "RemoveContainer" containerID="374f17c50435af68f94837d9af09168481544c4b01225884e562fa85013e44c8" Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.518366 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g9zvv"] Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.531600 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g9zvv"] Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.534365 4760 scope.go:117] "RemoveContainer" containerID="bb1e3975a22d0ee409f19d54547dac216475b3fadb17d81535f953d5c954ce9d" Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.584285 4760 scope.go:117] "RemoveContainer" containerID="29f5d2a9a6bd43debbf7f4c30bb8c3f83d3a328313c7ca999d3bda9bf96fb897" Dec 27 06:09:02 crc kubenswrapper[4760]: E1227 06:09:02.590609 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29f5d2a9a6bd43debbf7f4c30bb8c3f83d3a328313c7ca999d3bda9bf96fb897\": container with ID starting with 29f5d2a9a6bd43debbf7f4c30bb8c3f83d3a328313c7ca999d3bda9bf96fb897 not found: ID does not exist" containerID="29f5d2a9a6bd43debbf7f4c30bb8c3f83d3a328313c7ca999d3bda9bf96fb897" Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.590656 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29f5d2a9a6bd43debbf7f4c30bb8c3f83d3a328313c7ca999d3bda9bf96fb897"} err="failed to get container status \"29f5d2a9a6bd43debbf7f4c30bb8c3f83d3a328313c7ca999d3bda9bf96fb897\": rpc error: code = NotFound desc = could not find container \"29f5d2a9a6bd43debbf7f4c30bb8c3f83d3a328313c7ca999d3bda9bf96fb897\": container with ID starting with 29f5d2a9a6bd43debbf7f4c30bb8c3f83d3a328313c7ca999d3bda9bf96fb897 not found: ID does not exist" Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.590686 4760 scope.go:117] "RemoveContainer" containerID="374f17c50435af68f94837d9af09168481544c4b01225884e562fa85013e44c8" Dec 27 06:09:02 crc kubenswrapper[4760]: E1227 06:09:02.595728 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"374f17c50435af68f94837d9af09168481544c4b01225884e562fa85013e44c8\": container with ID starting with 374f17c50435af68f94837d9af09168481544c4b01225884e562fa85013e44c8 not found: ID does not exist" containerID="374f17c50435af68f94837d9af09168481544c4b01225884e562fa85013e44c8" Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.595778 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"374f17c50435af68f94837d9af09168481544c4b01225884e562fa85013e44c8"} err="failed to get container status \"374f17c50435af68f94837d9af09168481544c4b01225884e562fa85013e44c8\": rpc error: code = NotFound desc = could not find container \"374f17c50435af68f94837d9af09168481544c4b01225884e562fa85013e44c8\": container with ID starting with 374f17c50435af68f94837d9af09168481544c4b01225884e562fa85013e44c8 not found: ID does not exist" Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.595805 4760 scope.go:117] "RemoveContainer" containerID="bb1e3975a22d0ee409f19d54547dac216475b3fadb17d81535f953d5c954ce9d" Dec 27 06:09:02 crc kubenswrapper[4760]: E1227 06:09:02.596283 4760 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bb1e3975a22d0ee409f19d54547dac216475b3fadb17d81535f953d5c954ce9d\": container with ID starting with bb1e3975a22d0ee409f19d54547dac216475b3fadb17d81535f953d5c954ce9d not found: ID does not exist" containerID="bb1e3975a22d0ee409f19d54547dac216475b3fadb17d81535f953d5c954ce9d" Dec 27 06:09:02 crc kubenswrapper[4760]: I1227 06:09:02.596327 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb1e3975a22d0ee409f19d54547dac216475b3fadb17d81535f953d5c954ce9d"} err="failed to get container status \"bb1e3975a22d0ee409f19d54547dac216475b3fadb17d81535f953d5c954ce9d\": rpc error: code = NotFound desc = could not find container \"bb1e3975a22d0ee409f19d54547dac216475b3fadb17d81535f953d5c954ce9d\": container with ID starting with bb1e3975a22d0ee409f19d54547dac216475b3fadb17d81535f953d5c954ce9d not found: ID does not exist" Dec 27 06:09:03 crc kubenswrapper[4760]: I1227 06:09:03.517985 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fccc5a8f-3e19-4459-86a8-c767e0a91a9d" path="/var/lib/kubelet/pods/fccc5a8f-3e19-4459-86a8-c767e0a91a9d/volumes" Dec 27 06:09:32 crc kubenswrapper[4760]: I1227 06:09:32.789672 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4jcf2"] Dec 27 06:09:32 crc kubenswrapper[4760]: E1227 06:09:32.790558 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fccc5a8f-3e19-4459-86a8-c767e0a91a9d" containerName="registry-server" Dec 27 06:09:32 crc kubenswrapper[4760]: I1227 06:09:32.790573 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fccc5a8f-3e19-4459-86a8-c767e0a91a9d" containerName="registry-server" Dec 27 06:09:32 crc kubenswrapper[4760]: E1227 06:09:32.790608 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fccc5a8f-3e19-4459-86a8-c767e0a91a9d" containerName="extract-content" Dec 27 06:09:32 crc kubenswrapper[4760]: I1227 06:09:32.790615 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fccc5a8f-3e19-4459-86a8-c767e0a91a9d" containerName="extract-content" Dec 27 06:09:32 crc kubenswrapper[4760]: E1227 06:09:32.790627 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fccc5a8f-3e19-4459-86a8-c767e0a91a9d" containerName="extract-utilities" Dec 27 06:09:32 crc kubenswrapper[4760]: I1227 06:09:32.790634 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fccc5a8f-3e19-4459-86a8-c767e0a91a9d" containerName="extract-utilities" Dec 27 06:09:32 crc kubenswrapper[4760]: I1227 06:09:32.790828 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fccc5a8f-3e19-4459-86a8-c767e0a91a9d" containerName="registry-server" Dec 27 06:09:32 crc kubenswrapper[4760]: I1227 06:09:32.791946 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:32 crc kubenswrapper[4760]: I1227 06:09:32.807343 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jcf2"] Dec 27 06:09:32 crc kubenswrapper[4760]: I1227 06:09:32.959193 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa5ec93-c0c3-4be9-88b7-0c792e05437c-catalog-content\") pod \"redhat-marketplace-4jcf2\" (UID: \"efa5ec93-c0c3-4be9-88b7-0c792e05437c\") " pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:32 crc kubenswrapper[4760]: I1227 06:09:32.959571 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa5ec93-c0c3-4be9-88b7-0c792e05437c-utilities\") pod \"redhat-marketplace-4jcf2\" (UID: \"efa5ec93-c0c3-4be9-88b7-0c792e05437c\") " pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:32 crc kubenswrapper[4760]: I1227 06:09:32.959645 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmqfb\" (UniqueName: \"kubernetes.io/projected/efa5ec93-c0c3-4be9-88b7-0c792e05437c-kube-api-access-rmqfb\") pod \"redhat-marketplace-4jcf2\" (UID: \"efa5ec93-c0c3-4be9-88b7-0c792e05437c\") " pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:33 crc kubenswrapper[4760]: I1227 06:09:33.061369 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa5ec93-c0c3-4be9-88b7-0c792e05437c-catalog-content\") pod \"redhat-marketplace-4jcf2\" (UID: \"efa5ec93-c0c3-4be9-88b7-0c792e05437c\") " pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:33 crc kubenswrapper[4760]: I1227 06:09:33.061724 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa5ec93-c0c3-4be9-88b7-0c792e05437c-utilities\") pod \"redhat-marketplace-4jcf2\" (UID: \"efa5ec93-c0c3-4be9-88b7-0c792e05437c\") " pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:33 crc kubenswrapper[4760]: I1227 06:09:33.061924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmqfb\" (UniqueName: \"kubernetes.io/projected/efa5ec93-c0c3-4be9-88b7-0c792e05437c-kube-api-access-rmqfb\") pod \"redhat-marketplace-4jcf2\" (UID: \"efa5ec93-c0c3-4be9-88b7-0c792e05437c\") " pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:33 crc kubenswrapper[4760]: I1227 06:09:33.062057 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa5ec93-c0c3-4be9-88b7-0c792e05437c-catalog-content\") pod \"redhat-marketplace-4jcf2\" (UID: \"efa5ec93-c0c3-4be9-88b7-0c792e05437c\") " pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:33 crc kubenswrapper[4760]: I1227 06:09:33.062261 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa5ec93-c0c3-4be9-88b7-0c792e05437c-utilities\") pod \"redhat-marketplace-4jcf2\" (UID: \"efa5ec93-c0c3-4be9-88b7-0c792e05437c\") " pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:33 crc kubenswrapper[4760]: I1227 06:09:33.088271 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rmqfb\" (UniqueName: \"kubernetes.io/projected/efa5ec93-c0c3-4be9-88b7-0c792e05437c-kube-api-access-rmqfb\") pod \"redhat-marketplace-4jcf2\" (UID: \"efa5ec93-c0c3-4be9-88b7-0c792e05437c\") " pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:33 crc kubenswrapper[4760]: I1227 06:09:33.110027 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:33 crc kubenswrapper[4760]: I1227 06:09:33.558821 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jcf2"] Dec 27 06:09:33 crc kubenswrapper[4760]: I1227 06:09:33.732034 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jcf2" event={"ID":"efa5ec93-c0c3-4be9-88b7-0c792e05437c","Type":"ContainerStarted","Data":"5c1e260eb6ae87d26c5c6b660453221e0e3d88cc680b09b811461c15247a66ee"} Dec 27 06:09:34 crc kubenswrapper[4760]: I1227 06:09:34.738819 4760 generic.go:334] "Generic (PLEG): container finished" podID="efa5ec93-c0c3-4be9-88b7-0c792e05437c" containerID="b45888917194b074ab7beb4a93fc8dee27f62d91db7bc46e394669461c858ceb" exitCode=0 Dec 27 06:09:34 crc kubenswrapper[4760]: I1227 06:09:34.738929 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jcf2" event={"ID":"efa5ec93-c0c3-4be9-88b7-0c792e05437c","Type":"ContainerDied","Data":"b45888917194b074ab7beb4a93fc8dee27f62d91db7bc46e394669461c858ceb"} Dec 27 06:09:35 crc kubenswrapper[4760]: I1227 06:09:35.751308 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jcf2" event={"ID":"efa5ec93-c0c3-4be9-88b7-0c792e05437c","Type":"ContainerStarted","Data":"a795b021f6b2c601a1876606c28f84f884bb713addb4f83541392946663ef52b"} Dec 27 06:09:36 crc kubenswrapper[4760]: I1227 06:09:36.762302 4760 generic.go:334] "Generic (PLEG): container finished" podID="efa5ec93-c0c3-4be9-88b7-0c792e05437c" containerID="a795b021f6b2c601a1876606c28f84f884bb713addb4f83541392946663ef52b" exitCode=0 Dec 27 06:09:36 crc kubenswrapper[4760]: I1227 06:09:36.762389 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jcf2" event={"ID":"efa5ec93-c0c3-4be9-88b7-0c792e05437c","Type":"ContainerDied","Data":"a795b021f6b2c601a1876606c28f84f884bb713addb4f83541392946663ef52b"} Dec 27 06:09:37 crc kubenswrapper[4760]: I1227 06:09:37.774208 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jcf2" event={"ID":"efa5ec93-c0c3-4be9-88b7-0c792e05437c","Type":"ContainerStarted","Data":"46149524a23b215876622b4c7794dd2ec2a005321e3406a410b6f217f585706e"} Dec 27 06:09:43 crc kubenswrapper[4760]: I1227 06:09:43.110530 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:43 crc kubenswrapper[4760]: I1227 06:09:43.110889 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:43 crc kubenswrapper[4760]: I1227 06:09:43.162836 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:43 crc kubenswrapper[4760]: I1227 06:09:43.181020 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4jcf2" 
podStartSLOduration=8.777963449 podStartE2EDuration="11.180999589s" podCreationTimestamp="2025-12-27 06:09:32 +0000 UTC" firstStartedPulling="2025-12-27 06:09:34.740541936 +0000 UTC m=+1497.500611251" lastFinishedPulling="2025-12-27 06:09:37.143578066 +0000 UTC m=+1499.903647391" observedRunningTime="2025-12-27 06:09:37.808346378 +0000 UTC m=+1500.568415703" watchObservedRunningTime="2025-12-27 06:09:43.180999589 +0000 UTC m=+1505.941068914" Dec 27 06:09:43 crc kubenswrapper[4760]: I1227 06:09:43.893898 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:43 crc kubenswrapper[4760]: I1227 06:09:43.956930 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jcf2"] Dec 27 06:09:45 crc kubenswrapper[4760]: I1227 06:09:45.858417 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4jcf2" podUID="efa5ec93-c0c3-4be9-88b7-0c792e05437c" containerName="registry-server" containerID="cri-o://46149524a23b215876622b4c7794dd2ec2a005321e3406a410b6f217f585706e" gracePeriod=2 Dec 27 06:09:46 crc kubenswrapper[4760]: I1227 06:09:46.869773 4760 generic.go:334] "Generic (PLEG): container finished" podID="efa5ec93-c0c3-4be9-88b7-0c792e05437c" containerID="46149524a23b215876622b4c7794dd2ec2a005321e3406a410b6f217f585706e" exitCode=0 Dec 27 06:09:46 crc kubenswrapper[4760]: I1227 06:09:46.869883 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jcf2" event={"ID":"efa5ec93-c0c3-4be9-88b7-0c792e05437c","Type":"ContainerDied","Data":"46149524a23b215876622b4c7794dd2ec2a005321e3406a410b6f217f585706e"} Dec 27 06:09:47 crc kubenswrapper[4760]: I1227 06:09:47.779172 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:47 crc kubenswrapper[4760]: I1227 06:09:47.879859 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jcf2" event={"ID":"efa5ec93-c0c3-4be9-88b7-0c792e05437c","Type":"ContainerDied","Data":"5c1e260eb6ae87d26c5c6b660453221e0e3d88cc680b09b811461c15247a66ee"} Dec 27 06:09:47 crc kubenswrapper[4760]: I1227 06:09:47.879922 4760 scope.go:117] "RemoveContainer" containerID="46149524a23b215876622b4c7794dd2ec2a005321e3406a410b6f217f585706e" Dec 27 06:09:47 crc kubenswrapper[4760]: I1227 06:09:47.879937 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jcf2" Dec 27 06:09:47 crc kubenswrapper[4760]: I1227 06:09:47.901151 4760 scope.go:117] "RemoveContainer" containerID="a795b021f6b2c601a1876606c28f84f884bb713addb4f83541392946663ef52b" Dec 27 06:09:47 crc kubenswrapper[4760]: I1227 06:09:47.917373 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmqfb\" (UniqueName: \"kubernetes.io/projected/efa5ec93-c0c3-4be9-88b7-0c792e05437c-kube-api-access-rmqfb\") pod \"efa5ec93-c0c3-4be9-88b7-0c792e05437c\" (UID: \"efa5ec93-c0c3-4be9-88b7-0c792e05437c\") " Dec 27 06:09:47 crc kubenswrapper[4760]: I1227 06:09:47.917445 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa5ec93-c0c3-4be9-88b7-0c792e05437c-utilities\") pod \"efa5ec93-c0c3-4be9-88b7-0c792e05437c\" (UID: \"efa5ec93-c0c3-4be9-88b7-0c792e05437c\") " Dec 27 06:09:47 crc kubenswrapper[4760]: I1227 06:09:47.917678 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa5ec93-c0c3-4be9-88b7-0c792e05437c-catalog-content\") pod \"efa5ec93-c0c3-4be9-88b7-0c792e05437c\" (UID: \"efa5ec93-c0c3-4be9-88b7-0c792e05437c\") " Dec 27 06:09:47 crc kubenswrapper[4760]: I1227 06:09:47.919881 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa5ec93-c0c3-4be9-88b7-0c792e05437c-utilities" (OuterVolumeSpecName: "utilities") pod "efa5ec93-c0c3-4be9-88b7-0c792e05437c" (UID: "efa5ec93-c0c3-4be9-88b7-0c792e05437c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:09:47 crc kubenswrapper[4760]: I1227 06:09:47.923892 4760 scope.go:117] "RemoveContainer" containerID="b45888917194b074ab7beb4a93fc8dee27f62d91db7bc46e394669461c858ceb" Dec 27 06:09:47 crc kubenswrapper[4760]: I1227 06:09:47.926028 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa5ec93-c0c3-4be9-88b7-0c792e05437c-kube-api-access-rmqfb" (OuterVolumeSpecName: "kube-api-access-rmqfb") pod "efa5ec93-c0c3-4be9-88b7-0c792e05437c" (UID: "efa5ec93-c0c3-4be9-88b7-0c792e05437c"). InnerVolumeSpecName "kube-api-access-rmqfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:09:47 crc kubenswrapper[4760]: I1227 06:09:47.954950 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa5ec93-c0c3-4be9-88b7-0c792e05437c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efa5ec93-c0c3-4be9-88b7-0c792e05437c" (UID: "efa5ec93-c0c3-4be9-88b7-0c792e05437c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:09:48 crc kubenswrapper[4760]: I1227 06:09:48.018996 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmqfb\" (UniqueName: \"kubernetes.io/projected/efa5ec93-c0c3-4be9-88b7-0c792e05437c-kube-api-access-rmqfb\") on node \"crc\" DevicePath \"\"" Dec 27 06:09:48 crc kubenswrapper[4760]: I1227 06:09:48.019032 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa5ec93-c0c3-4be9-88b7-0c792e05437c-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 06:09:48 crc kubenswrapper[4760]: I1227 06:09:48.019043 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa5ec93-c0c3-4be9-88b7-0c792e05437c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 06:09:48 crc kubenswrapper[4760]: I1227 06:09:48.217277 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jcf2"] Dec 27 06:09:48 crc kubenswrapper[4760]: I1227 06:09:48.225672 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jcf2"] Dec 27 06:09:49 crc kubenswrapper[4760]: I1227 06:09:49.511558 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efa5ec93-c0c3-4be9-88b7-0c792e05437c" path="/var/lib/kubelet/pods/efa5ec93-c0c3-4be9-88b7-0c792e05437c/volumes" Dec 27 06:10:55 crc kubenswrapper[4760]: I1227 06:10:55.722984 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r7sch"] Dec 27 06:10:55 crc kubenswrapper[4760]: E1227 06:10:55.723983 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa5ec93-c0c3-4be9-88b7-0c792e05437c" containerName="extract-content" Dec 27 06:10:55 crc kubenswrapper[4760]: I1227 06:10:55.724003 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa5ec93-c0c3-4be9-88b7-0c792e05437c" containerName="extract-content" Dec 27 06:10:55 crc kubenswrapper[4760]: E1227 06:10:55.724016 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa5ec93-c0c3-4be9-88b7-0c792e05437c" containerName="registry-server" Dec 27 06:10:55 crc kubenswrapper[4760]: I1227 06:10:55.724026 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa5ec93-c0c3-4be9-88b7-0c792e05437c" containerName="registry-server" Dec 27 06:10:55 crc kubenswrapper[4760]: E1227 06:10:55.724066 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa5ec93-c0c3-4be9-88b7-0c792e05437c" containerName="extract-utilities" Dec 27 06:10:55 crc kubenswrapper[4760]: I1227 06:10:55.724075 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa5ec93-c0c3-4be9-88b7-0c792e05437c" containerName="extract-utilities" Dec 27 06:10:55 crc kubenswrapper[4760]: I1227 06:10:55.724291 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa5ec93-c0c3-4be9-88b7-0c792e05437c" containerName="registry-server" Dec 27 06:10:55 crc kubenswrapper[4760]: I1227 06:10:55.725565 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:10:55 crc kubenswrapper[4760]: I1227 06:10:55.740419 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7sch"] Dec 27 06:10:55 crc kubenswrapper[4760]: I1227 06:10:55.800903 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16fa3b7-5eb0-4690-8603-53488816ac8d-catalog-content\") pod \"certified-operators-r7sch\" (UID: \"e16fa3b7-5eb0-4690-8603-53488816ac8d\") " pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:10:55 crc kubenswrapper[4760]: I1227 06:10:55.801019 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw4dp\" (UniqueName: \"kubernetes.io/projected/e16fa3b7-5eb0-4690-8603-53488816ac8d-kube-api-access-rw4dp\") pod \"certified-operators-r7sch\" (UID: \"e16fa3b7-5eb0-4690-8603-53488816ac8d\") " pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:10:55 crc kubenswrapper[4760]: I1227 06:10:55.801111 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16fa3b7-5eb0-4690-8603-53488816ac8d-utilities\") pod \"certified-operators-r7sch\" (UID: \"e16fa3b7-5eb0-4690-8603-53488816ac8d\") " pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:10:55 crc kubenswrapper[4760]: I1227 06:10:55.902678 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw4dp\" (UniqueName: \"kubernetes.io/projected/e16fa3b7-5eb0-4690-8603-53488816ac8d-kube-api-access-rw4dp\") pod \"certified-operators-r7sch\" (UID: \"e16fa3b7-5eb0-4690-8603-53488816ac8d\") " pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:10:55 crc kubenswrapper[4760]: I1227 06:10:55.902745 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16fa3b7-5eb0-4690-8603-53488816ac8d-utilities\") pod \"certified-operators-r7sch\" (UID: \"e16fa3b7-5eb0-4690-8603-53488816ac8d\") " pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:10:55 crc kubenswrapper[4760]: I1227 06:10:55.902792 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16fa3b7-5eb0-4690-8603-53488816ac8d-catalog-content\") pod \"certified-operators-r7sch\" (UID: \"e16fa3b7-5eb0-4690-8603-53488816ac8d\") " pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:10:55 crc kubenswrapper[4760]: I1227 06:10:55.903647 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16fa3b7-5eb0-4690-8603-53488816ac8d-utilities\") pod \"certified-operators-r7sch\" (UID: \"e16fa3b7-5eb0-4690-8603-53488816ac8d\") " pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:10:55 crc kubenswrapper[4760]: I1227 06:10:55.903653 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16fa3b7-5eb0-4690-8603-53488816ac8d-catalog-content\") pod \"certified-operators-r7sch\" (UID: \"e16fa3b7-5eb0-4690-8603-53488816ac8d\") " pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:10:55 crc kubenswrapper[4760]: I1227 06:10:55.925059 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rw4dp\" (UniqueName: \"kubernetes.io/projected/e16fa3b7-5eb0-4690-8603-53488816ac8d-kube-api-access-rw4dp\") pod \"certified-operators-r7sch\" (UID: \"e16fa3b7-5eb0-4690-8603-53488816ac8d\") " pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:10:56 crc kubenswrapper[4760]: I1227 06:10:56.091460 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:10:56 crc kubenswrapper[4760]: I1227 06:10:56.611707 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7sch"] Dec 27 06:10:57 crc kubenswrapper[4760]: I1227 06:10:57.438634 4760 generic.go:334] "Generic (PLEG): container finished" podID="e16fa3b7-5eb0-4690-8603-53488816ac8d" containerID="9e97eb5f1c6457fd283beec5789e9c91ae03380d1ae53f43c4f852f251c340bd" exitCode=0 Dec 27 06:10:57 crc kubenswrapper[4760]: I1227 06:10:57.438711 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7sch" event={"ID":"e16fa3b7-5eb0-4690-8603-53488816ac8d","Type":"ContainerDied","Data":"9e97eb5f1c6457fd283beec5789e9c91ae03380d1ae53f43c4f852f251c340bd"} Dec 27 06:10:57 crc kubenswrapper[4760]: I1227 06:10:57.441472 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7sch" event={"ID":"e16fa3b7-5eb0-4690-8603-53488816ac8d","Type":"ContainerStarted","Data":"3fc20d748ca9adb036a1144ef3758e168c5887c3d71973a190b4a728ca66f54d"} Dec 27 06:10:59 crc kubenswrapper[4760]: I1227 06:10:59.458612 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7sch" event={"ID":"e16fa3b7-5eb0-4690-8603-53488816ac8d","Type":"ContainerStarted","Data":"49fcca04e85f4edb4a513060c43e08f0757c4c7966372e7437977b73d5a4c29e"} Dec 27 06:11:00 crc kubenswrapper[4760]: I1227 06:11:00.467232 4760 generic.go:334] "Generic (PLEG): container finished" podID="e16fa3b7-5eb0-4690-8603-53488816ac8d" containerID="49fcca04e85f4edb4a513060c43e08f0757c4c7966372e7437977b73d5a4c29e" exitCode=0 Dec 27 06:11:00 crc kubenswrapper[4760]: I1227 06:11:00.467273 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7sch" event={"ID":"e16fa3b7-5eb0-4690-8603-53488816ac8d","Type":"ContainerDied","Data":"49fcca04e85f4edb4a513060c43e08f0757c4c7966372e7437977b73d5a4c29e"} Dec 27 06:11:02 crc kubenswrapper[4760]: I1227 06:11:02.484490 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7sch" event={"ID":"e16fa3b7-5eb0-4690-8603-53488816ac8d","Type":"ContainerStarted","Data":"f70ec6b5953177fc3a0c387c10f50838f631e72406813719414303d23921d74b"} Dec 27 06:11:02 crc kubenswrapper[4760]: I1227 06:11:02.512408 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r7sch" podStartSLOduration=3.659071524 podStartE2EDuration="7.512388683s" podCreationTimestamp="2025-12-27 06:10:55 +0000 UTC" firstStartedPulling="2025-12-27 06:10:57.440171762 +0000 UTC m=+1580.200241077" lastFinishedPulling="2025-12-27 06:11:01.293488921 +0000 UTC m=+1584.053558236" observedRunningTime="2025-12-27 06:11:02.510957649 +0000 UTC m=+1585.271026994" watchObservedRunningTime="2025-12-27 06:11:02.512388683 +0000 UTC m=+1585.272457998" Dec 27 06:11:05 crc kubenswrapper[4760]: I1227 06:11:05.288016 4760 patch_prober.go:28] interesting 
pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 06:11:05 crc kubenswrapper[4760]: I1227 06:11:05.289249 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 06:11:06 crc kubenswrapper[4760]: I1227 06:11:06.092041 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:11:06 crc kubenswrapper[4760]: I1227 06:11:06.092112 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:11:06 crc kubenswrapper[4760]: I1227 06:11:06.149155 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:11:06 crc kubenswrapper[4760]: I1227 06:11:06.587364 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:11:06 crc kubenswrapper[4760]: I1227 06:11:06.663410 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7sch"] Dec 27 06:11:08 crc kubenswrapper[4760]: I1227 06:11:08.537184 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r7sch" podUID="e16fa3b7-5eb0-4690-8603-53488816ac8d" containerName="registry-server" containerID="cri-o://f70ec6b5953177fc3a0c387c10f50838f631e72406813719414303d23921d74b" gracePeriod=2 Dec 27 06:11:10 crc kubenswrapper[4760]: I1227 06:11:10.554616 4760 generic.go:334] "Generic (PLEG): container finished" podID="e16fa3b7-5eb0-4690-8603-53488816ac8d" containerID="f70ec6b5953177fc3a0c387c10f50838f631e72406813719414303d23921d74b" exitCode=0 Dec 27 06:11:10 crc kubenswrapper[4760]: I1227 06:11:10.554708 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7sch" event={"ID":"e16fa3b7-5eb0-4690-8603-53488816ac8d","Type":"ContainerDied","Data":"f70ec6b5953177fc3a0c387c10f50838f631e72406813719414303d23921d74b"} Dec 27 06:11:10 crc kubenswrapper[4760]: I1227 06:11:10.806199 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:11:10 crc kubenswrapper[4760]: I1227 06:11:10.949564 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16fa3b7-5eb0-4690-8603-53488816ac8d-catalog-content\") pod \"e16fa3b7-5eb0-4690-8603-53488816ac8d\" (UID: \"e16fa3b7-5eb0-4690-8603-53488816ac8d\") " Dec 27 06:11:10 crc kubenswrapper[4760]: I1227 06:11:10.949853 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16fa3b7-5eb0-4690-8603-53488816ac8d-utilities\") pod \"e16fa3b7-5eb0-4690-8603-53488816ac8d\" (UID: \"e16fa3b7-5eb0-4690-8603-53488816ac8d\") " Dec 27 06:11:10 crc kubenswrapper[4760]: I1227 06:11:10.949912 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw4dp\" (UniqueName: \"kubernetes.io/projected/e16fa3b7-5eb0-4690-8603-53488816ac8d-kube-api-access-rw4dp\") pod \"e16fa3b7-5eb0-4690-8603-53488816ac8d\" (UID: \"e16fa3b7-5eb0-4690-8603-53488816ac8d\") " Dec 27 06:11:10 crc kubenswrapper[4760]: I1227 06:11:10.951621 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16fa3b7-5eb0-4690-8603-53488816ac8d-utilities" (OuterVolumeSpecName: "utilities") pod "e16fa3b7-5eb0-4690-8603-53488816ac8d" (UID: "e16fa3b7-5eb0-4690-8603-53488816ac8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:11:10 crc kubenswrapper[4760]: I1227 06:11:10.955379 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16fa3b7-5eb0-4690-8603-53488816ac8d-kube-api-access-rw4dp" (OuterVolumeSpecName: "kube-api-access-rw4dp") pod "e16fa3b7-5eb0-4690-8603-53488816ac8d" (UID: "e16fa3b7-5eb0-4690-8603-53488816ac8d"). InnerVolumeSpecName "kube-api-access-rw4dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:11:10 crc kubenswrapper[4760]: I1227 06:11:10.997742 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16fa3b7-5eb0-4690-8603-53488816ac8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e16fa3b7-5eb0-4690-8603-53488816ac8d" (UID: "e16fa3b7-5eb0-4690-8603-53488816ac8d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:11:11 crc kubenswrapper[4760]: I1227 06:11:11.051374 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16fa3b7-5eb0-4690-8603-53488816ac8d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 06:11:11 crc kubenswrapper[4760]: I1227 06:11:11.051414 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16fa3b7-5eb0-4690-8603-53488816ac8d-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 06:11:11 crc kubenswrapper[4760]: I1227 06:11:11.051430 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw4dp\" (UniqueName: \"kubernetes.io/projected/e16fa3b7-5eb0-4690-8603-53488816ac8d-kube-api-access-rw4dp\") on node \"crc\" DevicePath \"\"" Dec 27 06:11:11 crc kubenswrapper[4760]: I1227 06:11:11.564631 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7sch" event={"ID":"e16fa3b7-5eb0-4690-8603-53488816ac8d","Type":"ContainerDied","Data":"3fc20d748ca9adb036a1144ef3758e168c5887c3d71973a190b4a728ca66f54d"} Dec 27 06:11:11 crc kubenswrapper[4760]: I1227 06:11:11.564685 4760 scope.go:117] "RemoveContainer" containerID="f70ec6b5953177fc3a0c387c10f50838f631e72406813719414303d23921d74b" Dec 27 06:11:11 crc kubenswrapper[4760]: I1227 06:11:11.564714 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7sch" Dec 27 06:11:11 crc kubenswrapper[4760]: I1227 06:11:11.582779 4760 scope.go:117] "RemoveContainer" containerID="49fcca04e85f4edb4a513060c43e08f0757c4c7966372e7437977b73d5a4c29e" Dec 27 06:11:11 crc kubenswrapper[4760]: I1227 06:11:11.594504 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7sch"] Dec 27 06:11:11 crc kubenswrapper[4760]: I1227 06:11:11.604056 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r7sch"] Dec 27 06:11:11 crc kubenswrapper[4760]: I1227 06:11:11.606307 4760 scope.go:117] "RemoveContainer" containerID="9e97eb5f1c6457fd283beec5789e9c91ae03380d1ae53f43c4f852f251c340bd" Dec 27 06:11:13 crc kubenswrapper[4760]: I1227 06:11:13.512770 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16fa3b7-5eb0-4690-8603-53488816ac8d" path="/var/lib/kubelet/pods/e16fa3b7-5eb0-4690-8603-53488816ac8d/volumes" Dec 27 06:11:35 crc kubenswrapper[4760]: I1227 06:11:35.287369 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 06:11:35 crc kubenswrapper[4760]: I1227 06:11:35.288140 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 06:11:44 crc kubenswrapper[4760]: I1227 06:11:44.212828 4760 scope.go:117] "RemoveContainer" containerID="b9e2be0ab7f8e9aca44766102a11a287fc3c579767dd4530d7b2fd86abad3223" Dec 27 06:12:05 crc kubenswrapper[4760]: I1227 06:12:05.287530 4760 patch_prober.go:28] interesting 
pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 06:12:05 crc kubenswrapper[4760]: I1227 06:12:05.288384 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 06:12:05 crc kubenswrapper[4760]: I1227 06:12:05.288480 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" Dec 27 06:12:05 crc kubenswrapper[4760]: I1227 06:12:05.289740 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717"} pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 27 06:12:05 crc kubenswrapper[4760]: I1227 06:12:05.289875 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" containerID="cri-o://25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" gracePeriod=600 Dec 27 06:12:05 crc kubenswrapper[4760]: E1227 06:12:05.422485 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:12:06 crc kubenswrapper[4760]: I1227 06:12:06.040748 4760 generic.go:334] "Generic (PLEG): container finished" podID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" exitCode=0 Dec 27 06:12:06 crc kubenswrapper[4760]: I1227 06:12:06.040809 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerDied","Data":"25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717"} Dec 27 06:12:06 crc kubenswrapper[4760]: I1227 06:12:06.040856 4760 scope.go:117] "RemoveContainer" containerID="560a955296849a499da3938ec04851b01ba39103dd145de7716581a5c91fbb44" Dec 27 06:12:06 crc kubenswrapper[4760]: I1227 06:12:06.041702 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:12:06 crc kubenswrapper[4760]: E1227 06:12:06.042240 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:12:16 crc kubenswrapper[4760]: I1227 06:12:16.503519 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:12:16 crc kubenswrapper[4760]: E1227 06:12:16.504384 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:12:18 crc kubenswrapper[4760]: I1227 06:12:18.921805 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-60ab-account-create-update-pv4t6_cc9402b9-fedb-4d22-b889-92209fd2cf4b/mariadb-account-create-update/0.log" Dec 27 06:12:19 crc kubenswrapper[4760]: I1227 06:12:19.436623 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-7d4784c744-hzvm6_61704514-50fd-411e-8fb5-93bcf85fc4df/keystone-api/0.log" Dec 27 06:12:19 crc kubenswrapper[4760]: I1227 06:12:19.991912 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-bootstrap-pws6r_9932da50-7e9f-48e5-a9a0-236cc084abb7/keystone-bootstrap/0.log" Dec 27 06:12:20 crc kubenswrapper[4760]: I1227 06:12:20.521008 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-db-create-zdmqk_2efe4afd-0932-4485-9215-08b34620744e/mariadb-database-create/0.log" Dec 27 06:12:21 crc kubenswrapper[4760]: I1227 06:12:21.056571 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-db-sync-2t87g_6d335d87-9e3c-4826-bea2-ec9884fde6e0/keystone-db-sync/0.log" Dec 27 06:12:21 crc kubenswrapper[4760]: I1227 06:12:21.795372 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_6fc09772-cca9-4ff6-88ae-b66171f0745f/memcached/0.log" Dec 27 06:12:22 crc kubenswrapper[4760]: I1227 06:12:22.324394 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_5afbca79-d46f-48e9-82f8-2f676c4c7960/galera/0.log" Dec 27 06:12:22 crc kubenswrapper[4760]: I1227 06:12:22.838423 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_ed93a46f-1df6-4144-8487-08764749423a/galera/0.log" Dec 27 06:12:23 crc kubenswrapper[4760]: I1227 06:12:23.351161 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_1d85013d-1d0a-4d2a-8322-7fc10a3745b7/openstackclient/0.log" Dec 27 06:12:23 crc kubenswrapper[4760]: I1227 06:12:23.864270 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-84867d49bd-zw5cr_14954fd5-cd52-4027-a904-27bfb69d6c6d/placement-log/0.log" Dec 27 06:12:24 crc kubenswrapper[4760]: I1227 06:12:24.399987 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-86ae-account-create-update-9z6mb_2522ef06-2d40-417c-9e4c-631cecbe0b25/mariadb-account-create-update/0.log" Dec 27 06:12:24 crc kubenswrapper[4760]: I1227 06:12:24.887824 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_placement-db-create-fxnxx_85a284e9-7716-4926-9cdd-fb2bab2edba2/mariadb-database-create/0.log" Dec 27 06:12:25 crc kubenswrapper[4760]: I1227 06:12:25.319006 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-db-sync-gjkm5_c288547a-4460-45d1-aad0-c311b34e2a6c/placement-db-sync/0.log" Dec 27 06:12:25 crc kubenswrapper[4760]: I1227 06:12:25.778684 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_9c61bc46-c657-4109-ae26-5f2d02fcce40/rabbitmq/0.log" Dec 27 06:12:26 crc kubenswrapper[4760]: I1227 06:12:26.250289 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_51ff80ae-27ca-4914-8c80-008f6d2d0860/rabbitmq/0.log" Dec 27 06:12:26 crc kubenswrapper[4760]: I1227 06:12:26.690627 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_88c1643d-b9a1-49ac-aff2-39bff3918b3e/rabbitmq/0.log" Dec 27 06:12:27 crc kubenswrapper[4760]: I1227 06:12:27.091958 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_root-account-create-update-7r7jq_3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af/mariadb-account-create-update/0.log" Dec 27 06:12:28 crc kubenswrapper[4760]: I1227 06:12:28.502939 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:12:28 crc kubenswrapper[4760]: E1227 06:12:28.503740 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:12:42 crc kubenswrapper[4760]: I1227 06:12:42.502906 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:12:42 crc kubenswrapper[4760]: E1227 06:12:42.504061 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:12:44 crc kubenswrapper[4760]: I1227 06:12:44.288473 4760 scope.go:117] "RemoveContainer" containerID="1c79392ca1ae462f2a518acfe4683c290bbc8c87d2a1ae253711e325d8d9dc0a" Dec 27 06:12:57 crc kubenswrapper[4760]: I1227 06:12:57.515347 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:12:57 crc kubenswrapper[4760]: E1227 06:12:57.516416 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:12:58 crc kubenswrapper[4760]: I1227 06:12:58.464692 4760 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-568d76f566-slgbn_73020260-4c3a-4bd2-8749-58d052d076e3/manager/0.log" Dec 27 06:12:58 crc kubenswrapper[4760]: I1227 06:12:58.865515 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f_d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc/extract/0.log" Dec 27 06:12:59 crc kubenswrapper[4760]: I1227 06:12:59.298377 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5f98b4754f-km8pb_ea3b319d-0327-447a-a3fb-b872f98c5e99/manager/0.log" Dec 27 06:12:59 crc kubenswrapper[4760]: I1227 06:12:59.832803 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx_556d72b2-b316-46cf-9094-370351d21aee/extract/0.log" Dec 27 06:13:00 crc kubenswrapper[4760]: I1227 06:13:00.295016 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-ggw2m_0dc73c70-a9ca-4cc4-a69a-21eaca2dd2c2/manager/0.log" Dec 27 06:13:00 crc kubenswrapper[4760]: I1227 06:13:00.725476 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-688f464774-5hjlr_177b0c02-1f5b-4315-b854-5465123ebcab/manager/0.log" Dec 27 06:13:01 crc kubenswrapper[4760]: I1227 06:13:01.145864 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-4fmwm_11fdabd0-f272-435e-86cf-a3fe7343eb0f/manager/0.log" Dec 27 06:13:01 crc kubenswrapper[4760]: I1227 06:13:01.595170 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-q4p4p_d0770d35-0c7d-4fe6-91f3-e42a6ada0c8d/manager/0.log" Dec 27 06:13:02 crc kubenswrapper[4760]: I1227 06:13:02.124264 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-vb95d_f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8/manager/0.log" Dec 27 06:13:02 crc kubenswrapper[4760]: I1227 06:13:02.734698 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-kss4t_fe17c9a5-ed8f-4fd6-8b6b-e2e2107092a2/manager/0.log" Dec 27 06:13:03 crc kubenswrapper[4760]: I1227 06:13:03.223777 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-q874z_0d86a798-bc69-4423-96ce-dc1b4fd03bc8/manager/0.log" Dec 27 06:13:03 crc kubenswrapper[4760]: I1227 06:13:03.656404 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5fdd9786f7-674bs_160a8b25-a031-4204-870f-2385ceaaf80e/manager/0.log" Dec 27 06:13:04 crc kubenswrapper[4760]: I1227 06:13:04.105964 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6d59c96c98-ksz65_0bc672f2-6342-4276-a266-c6bbd7f1896c/manager/0.log" Dec 27 06:13:04 crc kubenswrapper[4760]: I1227 06:13:04.541859 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-2rdv8_4de77998-df1d-4b52-8bbe-3cb9b35356fd/manager/0.log" Dec 27 06:13:04 crc kubenswrapper[4760]: I1227 06:13:04.965753 4760 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7fd66c86cd-9rkmn_90731f81-cc83-40ff-af06-25f6ea776753/manager/0.log" Dec 27 06:13:05 crc kubenswrapper[4760]: I1227 06:13:05.374989 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-jt5kk_2ac71051-138e-44fa-a101-d00bd0811942/registry-server/0.log" Dec 27 06:13:05 crc kubenswrapper[4760]: I1227 06:13:05.819777 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-zjrv7_733f56b3-c5b1-4f9c-a35c-bbde9c68ebf1/manager/0.log" Dec 27 06:13:06 crc kubenswrapper[4760]: I1227 06:13:06.274499 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz_ed40f1e4-9823-4b11-b48e-4f4019a3796c/manager/0.log" Dec 27 06:13:06 crc kubenswrapper[4760]: I1227 06:13:06.939410 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-66d6f5c46d-cznpr_b0d5f45a-ddba-4d84-b567-17193cfaef2b/manager/0.log" Dec 27 06:13:07 crc kubenswrapper[4760]: I1227 06:13:07.368771 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8ml8h_a55331c6-2f6f-4a43-b6e0-69e5be40f28c/registry-server/0.log" Dec 27 06:13:07 crc kubenswrapper[4760]: I1227 06:13:07.795916 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-7gh59_e4d879d7-4ae6-4499-89ed-98f48e4f0541/manager/0.log" Dec 27 06:13:08 crc kubenswrapper[4760]: I1227 06:13:08.305243 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-7fdbb74498-gnjnq_d8cc7b7c-b6b1-4eab-b0bb-616b227dc790/manager/0.log" Dec 27 06:13:08 crc kubenswrapper[4760]: I1227 06:13:08.756493 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ps2r8_4aa3ec72-b438-4ef7-a213-0b6148aed51b/operator/0.log" Dec 27 06:13:09 crc kubenswrapper[4760]: I1227 06:13:09.175375 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-g4z7d_6c7e09e0-9c92-40fa-99e1-b510ab43fb39/manager/0.log" Dec 27 06:13:09 crc kubenswrapper[4760]: I1227 06:13:09.652218 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fbb89d79f-gwngw_3250a06b-b268-405c-b891-7657e4818fe8/manager/0.log" Dec 27 06:13:10 crc kubenswrapper[4760]: I1227 06:13:10.077794 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-74f84d69b6-vvjqd_f7aa22fd-6d4a-484b-9814-3e8e8766a423/manager/0.log" Dec 27 06:13:10 crc kubenswrapper[4760]: I1227 06:13:10.522072 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-57d64f56b7-g5mgk_f5ce1921-f89d-4e7f-b7b8-93d6a2a284bf/manager/0.log" Dec 27 06:13:11 crc kubenswrapper[4760]: I1227 06:13:11.503555 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:13:11 crc kubenswrapper[4760]: E1227 06:13:11.503956 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:13:15 crc kubenswrapper[4760]: I1227 06:13:15.482924 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-60ab-account-create-update-pv4t6_cc9402b9-fedb-4d22-b889-92209fd2cf4b/mariadb-account-create-update/0.log" Dec 27 06:13:16 crc kubenswrapper[4760]: I1227 06:13:16.031199 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-7d4784c744-hzvm6_61704514-50fd-411e-8fb5-93bcf85fc4df/keystone-api/0.log" Dec 27 06:13:16 crc kubenswrapper[4760]: I1227 06:13:16.557557 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-bootstrap-pws6r_9932da50-7e9f-48e5-a9a0-236cc084abb7/keystone-bootstrap/0.log" Dec 27 06:13:17 crc kubenswrapper[4760]: I1227 06:13:17.077520 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-db-create-zdmqk_2efe4afd-0932-4485-9215-08b34620744e/mariadb-database-create/0.log" Dec 27 06:13:17 crc kubenswrapper[4760]: I1227 06:13:17.633689 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-db-sync-2t87g_6d335d87-9e3c-4826-bea2-ec9884fde6e0/keystone-db-sync/0.log" Dec 27 06:13:18 crc kubenswrapper[4760]: I1227 06:13:18.315154 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_6fc09772-cca9-4ff6-88ae-b66171f0745f/memcached/0.log" Dec 27 06:13:18 crc kubenswrapper[4760]: I1227 06:13:18.826884 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_5afbca79-d46f-48e9-82f8-2f676c4c7960/galera/0.log" Dec 27 06:13:19 crc kubenswrapper[4760]: I1227 06:13:19.386581 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_ed93a46f-1df6-4144-8487-08764749423a/galera/0.log" Dec 27 06:13:19 crc kubenswrapper[4760]: I1227 06:13:19.910697 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_1d85013d-1d0a-4d2a-8322-7fc10a3745b7/openstackclient/0.log" Dec 27 06:13:20 crc kubenswrapper[4760]: I1227 06:13:20.450875 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-84867d49bd-zw5cr_14954fd5-cd52-4027-a904-27bfb69d6c6d/placement-log/0.log" Dec 27 06:13:20 crc kubenswrapper[4760]: I1227 06:13:20.985066 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-86ae-account-create-update-9z6mb_2522ef06-2d40-417c-9e4c-631cecbe0b25/mariadb-account-create-update/0.log" Dec 27 06:13:21 crc kubenswrapper[4760]: I1227 06:13:21.475233 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-db-create-fxnxx_85a284e9-7716-4926-9cdd-fb2bab2edba2/mariadb-database-create/0.log" Dec 27 06:13:21 crc kubenswrapper[4760]: I1227 06:13:21.901115 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-db-sync-gjkm5_c288547a-4460-45d1-aad0-c311b34e2a6c/placement-db-sync/0.log" Dec 27 06:13:22 crc kubenswrapper[4760]: I1227 06:13:22.325568 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_9c61bc46-c657-4109-ae26-5f2d02fcce40/rabbitmq/0.log" Dec 27 
06:13:22 crc kubenswrapper[4760]: I1227 06:13:22.783877 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_51ff80ae-27ca-4914-8c80-008f6d2d0860/rabbitmq/0.log" Dec 27 06:13:23 crc kubenswrapper[4760]: I1227 06:13:23.216219 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_88c1643d-b9a1-49ac-aff2-39bff3918b3e/rabbitmq/0.log" Dec 27 06:13:23 crc kubenswrapper[4760]: I1227 06:13:23.727992 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_root-account-create-update-7r7jq_3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af/mariadb-account-create-update/0.log" Dec 27 06:13:26 crc kubenswrapper[4760]: I1227 06:13:26.503220 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:13:26 crc kubenswrapper[4760]: E1227 06:13:26.503549 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:13:39 crc kubenswrapper[4760]: I1227 06:13:39.502749 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:13:39 crc kubenswrapper[4760]: E1227 06:13:39.503649 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:13:50 crc kubenswrapper[4760]: I1227 06:13:50.502553 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:13:50 crc kubenswrapper[4760]: E1227 06:13:50.503727 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:13:54 crc kubenswrapper[4760]: I1227 06:13:54.206007 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-568d76f566-slgbn_73020260-4c3a-4bd2-8749-58d052d076e3/manager/0.log" Dec 27 06:13:54 crc kubenswrapper[4760]: I1227 06:13:54.615892 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f_d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc/extract/0.log" Dec 27 06:13:55 crc kubenswrapper[4760]: I1227 06:13:55.066348 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5f98b4754f-km8pb_ea3b319d-0327-447a-a3fb-b872f98c5e99/manager/0.log" Dec 27 06:13:55 crc kubenswrapper[4760]: I1227 06:13:55.498722 4760 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx_556d72b2-b316-46cf-9094-370351d21aee/extract/0.log" Dec 27 06:13:55 crc kubenswrapper[4760]: I1227 06:13:55.906564 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-ggw2m_0dc73c70-a9ca-4cc4-a69a-21eaca2dd2c2/manager/0.log" Dec 27 06:13:56 crc kubenswrapper[4760]: I1227 06:13:56.304888 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-688f464774-5hjlr_177b0c02-1f5b-4315-b854-5465123ebcab/manager/0.log" Dec 27 06:13:56 crc kubenswrapper[4760]: I1227 06:13:56.739024 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-4fmwm_11fdabd0-f272-435e-86cf-a3fe7343eb0f/manager/0.log" Dec 27 06:13:57 crc kubenswrapper[4760]: I1227 06:13:57.178057 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-q4p4p_d0770d35-0c7d-4fe6-91f3-e42a6ada0c8d/manager/0.log" Dec 27 06:13:57 crc kubenswrapper[4760]: I1227 06:13:57.640027 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-vb95d_f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8/manager/0.log" Dec 27 06:13:58 crc kubenswrapper[4760]: I1227 06:13:58.119220 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-kss4t_fe17c9a5-ed8f-4fd6-8b6b-e2e2107092a2/manager/0.log" Dec 27 06:13:58 crc kubenswrapper[4760]: I1227 06:13:58.622671 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-q874z_0d86a798-bc69-4423-96ce-dc1b4fd03bc8/manager/0.log" Dec 27 06:13:59 crc kubenswrapper[4760]: I1227 06:13:59.105500 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5fdd9786f7-674bs_160a8b25-a031-4204-870f-2385ceaaf80e/manager/0.log" Dec 27 06:13:59 crc kubenswrapper[4760]: I1227 06:13:59.594899 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6d59c96c98-ksz65_0bc672f2-6342-4276-a266-c6bbd7f1896c/manager/0.log" Dec 27 06:14:00 crc kubenswrapper[4760]: I1227 06:14:00.091195 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-2rdv8_4de77998-df1d-4b52-8bbe-3cb9b35356fd/manager/0.log" Dec 27 06:14:00 crc kubenswrapper[4760]: I1227 06:14:00.549532 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7fd66c86cd-9rkmn_90731f81-cc83-40ff-af06-25f6ea776753/manager/0.log" Dec 27 06:14:01 crc kubenswrapper[4760]: I1227 06:14:01.008418 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-jt5kk_2ac71051-138e-44fa-a101-d00bd0811942/registry-server/0.log" Dec 27 06:14:01 crc kubenswrapper[4760]: I1227 06:14:01.494852 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-zjrv7_733f56b3-c5b1-4f9c-a35c-bbde9c68ebf1/manager/0.log" Dec 27 06:14:01 crc kubenswrapper[4760]: I1227 06:14:01.503343 4760 scope.go:117] "RemoveContainer" 
containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:14:01 crc kubenswrapper[4760]: E1227 06:14:01.503761 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:14:01 crc kubenswrapper[4760]: I1227 06:14:01.935554 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz_ed40f1e4-9823-4b11-b48e-4f4019a3796c/manager/0.log" Dec 27 06:14:02 crc kubenswrapper[4760]: I1227 06:14:02.541257 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-66d6f5c46d-cznpr_b0d5f45a-ddba-4d84-b567-17193cfaef2b/manager/0.log" Dec 27 06:14:03 crc kubenswrapper[4760]: I1227 06:14:03.046523 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8ml8h_a55331c6-2f6f-4a43-b6e0-69e5be40f28c/registry-server/0.log" Dec 27 06:14:03 crc kubenswrapper[4760]: I1227 06:14:03.531960 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-7gh59_e4d879d7-4ae6-4499-89ed-98f48e4f0541/manager/0.log" Dec 27 06:14:03 crc kubenswrapper[4760]: I1227 06:14:03.966403 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-7fdbb74498-gnjnq_d8cc7b7c-b6b1-4eab-b0bb-616b227dc790/manager/0.log" Dec 27 06:14:04 crc kubenswrapper[4760]: I1227 06:14:04.428544 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ps2r8_4aa3ec72-b438-4ef7-a213-0b6148aed51b/operator/0.log" Dec 27 06:14:04 crc kubenswrapper[4760]: I1227 06:14:04.855984 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-g4z7d_6c7e09e0-9c92-40fa-99e1-b510ab43fb39/manager/0.log" Dec 27 06:14:05 crc kubenswrapper[4760]: I1227 06:14:05.299540 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fbb89d79f-gwngw_3250a06b-b268-405c-b891-7657e4818fe8/manager/0.log" Dec 27 06:14:05 crc kubenswrapper[4760]: I1227 06:14:05.745499 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-74f84d69b6-vvjqd_f7aa22fd-6d4a-484b-9814-3e8e8766a423/manager/0.log" Dec 27 06:14:06 crc kubenswrapper[4760]: I1227 06:14:06.165225 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-57d64f56b7-g5mgk_f5ce1921-f89d-4e7f-b7b8-93d6a2a284bf/manager/0.log" Dec 27 06:14:14 crc kubenswrapper[4760]: I1227 06:14:14.503845 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:14:14 crc kubenswrapper[4760]: E1227 06:14:14.505211 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:14:27 crc kubenswrapper[4760]: I1227 06:14:27.509316 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:14:27 crc kubenswrapper[4760]: E1227 06:14:27.510154 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:14:27 crc kubenswrapper[4760]: I1227 06:14:27.799739 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kpjcg/must-gather-sqm6s"] Dec 27 06:14:27 crc kubenswrapper[4760]: E1227 06:14:27.800414 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16fa3b7-5eb0-4690-8603-53488816ac8d" containerName="extract-content" Dec 27 06:14:27 crc kubenswrapper[4760]: I1227 06:14:27.800427 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16fa3b7-5eb0-4690-8603-53488816ac8d" containerName="extract-content" Dec 27 06:14:27 crc kubenswrapper[4760]: E1227 06:14:27.800435 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16fa3b7-5eb0-4690-8603-53488816ac8d" containerName="extract-utilities" Dec 27 06:14:27 crc kubenswrapper[4760]: I1227 06:14:27.800442 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16fa3b7-5eb0-4690-8603-53488816ac8d" containerName="extract-utilities" Dec 27 06:14:27 crc kubenswrapper[4760]: E1227 06:14:27.800471 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16fa3b7-5eb0-4690-8603-53488816ac8d" containerName="registry-server" Dec 27 06:14:27 crc kubenswrapper[4760]: I1227 06:14:27.800478 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16fa3b7-5eb0-4690-8603-53488816ac8d" containerName="registry-server" Dec 27 06:14:27 crc kubenswrapper[4760]: I1227 06:14:27.809287 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16fa3b7-5eb0-4690-8603-53488816ac8d" containerName="registry-server" Dec 27 06:14:27 crc kubenswrapper[4760]: I1227 06:14:27.810166 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kpjcg/must-gather-sqm6s" Dec 27 06:14:27 crc kubenswrapper[4760]: I1227 06:14:27.813485 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kpjcg"/"openshift-service-ca.crt" Dec 27 06:14:27 crc kubenswrapper[4760]: I1227 06:14:27.813501 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kpjcg"/"kube-root-ca.crt" Dec 27 06:14:27 crc kubenswrapper[4760]: I1227 06:14:27.816366 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kpjcg"/"default-dockercfg-thktg" Dec 27 06:14:27 crc kubenswrapper[4760]: I1227 06:14:27.820918 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kpjcg/must-gather-sqm6s"] Dec 27 06:14:27 crc kubenswrapper[4760]: I1227 06:14:27.883821 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a42ba0ed-110f-4302-a1b5-6988339a1039-must-gather-output\") pod \"must-gather-sqm6s\" (UID: \"a42ba0ed-110f-4302-a1b5-6988339a1039\") " pod="openshift-must-gather-kpjcg/must-gather-sqm6s" Dec 27 06:14:27 crc kubenswrapper[4760]: I1227 06:14:27.884152 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qctg9\" (UniqueName: \"kubernetes.io/projected/a42ba0ed-110f-4302-a1b5-6988339a1039-kube-api-access-qctg9\") pod \"must-gather-sqm6s\" (UID: \"a42ba0ed-110f-4302-a1b5-6988339a1039\") " pod="openshift-must-gather-kpjcg/must-gather-sqm6s" Dec 27 06:14:27 crc kubenswrapper[4760]: I1227 06:14:27.985808 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a42ba0ed-110f-4302-a1b5-6988339a1039-must-gather-output\") pod \"must-gather-sqm6s\" (UID: \"a42ba0ed-110f-4302-a1b5-6988339a1039\") " pod="openshift-must-gather-kpjcg/must-gather-sqm6s" Dec 27 06:14:27 crc kubenswrapper[4760]: I1227 06:14:27.985912 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qctg9\" (UniqueName: \"kubernetes.io/projected/a42ba0ed-110f-4302-a1b5-6988339a1039-kube-api-access-qctg9\") pod \"must-gather-sqm6s\" (UID: \"a42ba0ed-110f-4302-a1b5-6988339a1039\") " pod="openshift-must-gather-kpjcg/must-gather-sqm6s" Dec 27 06:14:27 crc kubenswrapper[4760]: I1227 06:14:27.986275 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a42ba0ed-110f-4302-a1b5-6988339a1039-must-gather-output\") pod \"must-gather-sqm6s\" (UID: \"a42ba0ed-110f-4302-a1b5-6988339a1039\") " pod="openshift-must-gather-kpjcg/must-gather-sqm6s" Dec 27 06:14:28 crc kubenswrapper[4760]: I1227 06:14:28.002991 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qctg9\" (UniqueName: \"kubernetes.io/projected/a42ba0ed-110f-4302-a1b5-6988339a1039-kube-api-access-qctg9\") pod \"must-gather-sqm6s\" (UID: \"a42ba0ed-110f-4302-a1b5-6988339a1039\") " pod="openshift-must-gather-kpjcg/must-gather-sqm6s" Dec 27 06:14:28 crc kubenswrapper[4760]: I1227 06:14:28.128703 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kpjcg/must-gather-sqm6s" Dec 27 06:14:28 crc kubenswrapper[4760]: I1227 06:14:28.385980 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kpjcg/must-gather-sqm6s"] Dec 27 06:14:28 crc kubenswrapper[4760]: W1227 06:14:28.415807 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda42ba0ed_110f_4302_a1b5_6988339a1039.slice/crio-33341e391ef8b04a0469ebbc22a8d9c4db243ba583a5c5ad4a5bc803b5eb9198 WatchSource:0}: Error finding container 33341e391ef8b04a0469ebbc22a8d9c4db243ba583a5c5ad4a5bc803b5eb9198: Status 404 returned error can't find the container with id 33341e391ef8b04a0469ebbc22a8d9c4db243ba583a5c5ad4a5bc803b5eb9198 Dec 27 06:14:28 crc kubenswrapper[4760]: I1227 06:14:28.419465 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 27 06:14:29 crc kubenswrapper[4760]: I1227 06:14:29.313332 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kpjcg/must-gather-sqm6s" event={"ID":"a42ba0ed-110f-4302-a1b5-6988339a1039","Type":"ContainerStarted","Data":"33341e391ef8b04a0469ebbc22a8d9c4db243ba583a5c5ad4a5bc803b5eb9198"} Dec 27 06:14:38 crc kubenswrapper[4760]: I1227 06:14:38.419469 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kpjcg/must-gather-sqm6s" event={"ID":"a42ba0ed-110f-4302-a1b5-6988339a1039","Type":"ContainerStarted","Data":"7e98db6db2b9c4c8892ff54cd3172194e832162bf057eadec9b453f346e7116e"} Dec 27 06:14:38 crc kubenswrapper[4760]: I1227 06:14:38.420262 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kpjcg/must-gather-sqm6s" event={"ID":"a42ba0ed-110f-4302-a1b5-6988339a1039","Type":"ContainerStarted","Data":"2adc3ed54b4cff5724900f0dff3f273ada4fcbe9d9016e4e62cf9a1813bfd02e"} Dec 27 06:14:38 crc kubenswrapper[4760]: I1227 06:14:38.448726 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kpjcg/must-gather-sqm6s" podStartSLOduration=2.570206527 podStartE2EDuration="11.448700874s" podCreationTimestamp="2025-12-27 06:14:27 +0000 UTC" firstStartedPulling="2025-12-27 06:14:28.419188336 +0000 UTC m=+1791.179257661" lastFinishedPulling="2025-12-27 06:14:37.297682693 +0000 UTC m=+1800.057752008" observedRunningTime="2025-12-27 06:14:38.44035389 +0000 UTC m=+1801.200423215" watchObservedRunningTime="2025-12-27 06:14:38.448700874 +0000 UTC m=+1801.208770199" Dec 27 06:14:38 crc kubenswrapper[4760]: I1227 06:14:38.503510 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:14:38 crc kubenswrapper[4760]: E1227 06:14:38.503774 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:14:51 crc kubenswrapper[4760]: I1227 06:14:51.502009 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:14:51 crc kubenswrapper[4760]: E1227 06:14:51.502687 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:15:00 crc kubenswrapper[4760]: I1227 06:15:00.144951 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d"] Dec 27 06:15:00 crc kubenswrapper[4760]: I1227 06:15:00.146271 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d" Dec 27 06:15:00 crc kubenswrapper[4760]: I1227 06:15:00.154784 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 27 06:15:00 crc kubenswrapper[4760]: I1227 06:15:00.154794 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 27 06:15:00 crc kubenswrapper[4760]: I1227 06:15:00.166770 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d"] Dec 27 06:15:00 crc kubenswrapper[4760]: I1227 06:15:00.318209 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5lh7\" (UniqueName: \"kubernetes.io/projected/5ced29d4-ea1d-466a-bd87-816ec47775ef-kube-api-access-m5lh7\") pod \"collect-profiles-29446935-7822d\" (UID: \"5ced29d4-ea1d-466a-bd87-816ec47775ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d" Dec 27 06:15:00 crc kubenswrapper[4760]: I1227 06:15:00.318379 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5ced29d4-ea1d-466a-bd87-816ec47775ef-secret-volume\") pod \"collect-profiles-29446935-7822d\" (UID: \"5ced29d4-ea1d-466a-bd87-816ec47775ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d" Dec 27 06:15:00 crc kubenswrapper[4760]: I1227 06:15:00.318482 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ced29d4-ea1d-466a-bd87-816ec47775ef-config-volume\") pod \"collect-profiles-29446935-7822d\" (UID: \"5ced29d4-ea1d-466a-bd87-816ec47775ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d" Dec 27 06:15:00 crc kubenswrapper[4760]: I1227 06:15:00.420243 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ced29d4-ea1d-466a-bd87-816ec47775ef-config-volume\") pod \"collect-profiles-29446935-7822d\" (UID: \"5ced29d4-ea1d-466a-bd87-816ec47775ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d" Dec 27 06:15:00 crc kubenswrapper[4760]: I1227 06:15:00.420479 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5lh7\" (UniqueName: \"kubernetes.io/projected/5ced29d4-ea1d-466a-bd87-816ec47775ef-kube-api-access-m5lh7\") pod \"collect-profiles-29446935-7822d\" (UID: \"5ced29d4-ea1d-466a-bd87-816ec47775ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d" Dec 27 
06:15:00 crc kubenswrapper[4760]: I1227 06:15:00.420547 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5ced29d4-ea1d-466a-bd87-816ec47775ef-secret-volume\") pod \"collect-profiles-29446935-7822d\" (UID: \"5ced29d4-ea1d-466a-bd87-816ec47775ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d" Dec 27 06:15:00 crc kubenswrapper[4760]: I1227 06:15:00.421176 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ced29d4-ea1d-466a-bd87-816ec47775ef-config-volume\") pod \"collect-profiles-29446935-7822d\" (UID: \"5ced29d4-ea1d-466a-bd87-816ec47775ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d" Dec 27 06:15:00 crc kubenswrapper[4760]: I1227 06:15:00.429150 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5ced29d4-ea1d-466a-bd87-816ec47775ef-secret-volume\") pod \"collect-profiles-29446935-7822d\" (UID: \"5ced29d4-ea1d-466a-bd87-816ec47775ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d" Dec 27 06:15:00 crc kubenswrapper[4760]: I1227 06:15:00.445475 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5lh7\" (UniqueName: \"kubernetes.io/projected/5ced29d4-ea1d-466a-bd87-816ec47775ef-kube-api-access-m5lh7\") pod \"collect-profiles-29446935-7822d\" (UID: \"5ced29d4-ea1d-466a-bd87-816ec47775ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d" Dec 27 06:15:00 crc kubenswrapper[4760]: I1227 06:15:00.478161 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d" Dec 27 06:15:00 crc kubenswrapper[4760]: I1227 06:15:00.923637 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d"] Dec 27 06:15:00 crc kubenswrapper[4760]: W1227 06:15:00.944384 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ced29d4_ea1d_466a_bd87_816ec47775ef.slice/crio-452ef491aa7e7212913e755f30424c4fab728f52e81e559ba9be54db6633d842 WatchSource:0}: Error finding container 452ef491aa7e7212913e755f30424c4fab728f52e81e559ba9be54db6633d842: Status 404 returned error can't find the container with id 452ef491aa7e7212913e755f30424c4fab728f52e81e559ba9be54db6633d842 Dec 27 06:15:01 crc kubenswrapper[4760]: I1227 06:15:01.578869 4760 generic.go:334] "Generic (PLEG): container finished" podID="5ced29d4-ea1d-466a-bd87-816ec47775ef" containerID="c8aaa6a58f6468c7e09862824e2e155d23389348e5ef3da822fc3071fc526dac" exitCode=0 Dec 27 06:15:01 crc kubenswrapper[4760]: I1227 06:15:01.579027 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d" event={"ID":"5ced29d4-ea1d-466a-bd87-816ec47775ef","Type":"ContainerDied","Data":"c8aaa6a58f6468c7e09862824e2e155d23389348e5ef3da822fc3071fc526dac"} Dec 27 06:15:01 crc kubenswrapper[4760]: I1227 06:15:01.579126 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d" event={"ID":"5ced29d4-ea1d-466a-bd87-816ec47775ef","Type":"ContainerStarted","Data":"452ef491aa7e7212913e755f30424c4fab728f52e81e559ba9be54db6633d842"} Dec 
27 06:15:03 crc kubenswrapper[4760]: I1227 06:15:02.921278 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d" Dec 27 06:15:03 crc kubenswrapper[4760]: I1227 06:15:03.061215 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ced29d4-ea1d-466a-bd87-816ec47775ef-config-volume\") pod \"5ced29d4-ea1d-466a-bd87-816ec47775ef\" (UID: \"5ced29d4-ea1d-466a-bd87-816ec47775ef\") " Dec 27 06:15:03 crc kubenswrapper[4760]: I1227 06:15:03.061291 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5ced29d4-ea1d-466a-bd87-816ec47775ef-secret-volume\") pod \"5ced29d4-ea1d-466a-bd87-816ec47775ef\" (UID: \"5ced29d4-ea1d-466a-bd87-816ec47775ef\") " Dec 27 06:15:03 crc kubenswrapper[4760]: I1227 06:15:03.061399 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lh7\" (UniqueName: \"kubernetes.io/projected/5ced29d4-ea1d-466a-bd87-816ec47775ef-kube-api-access-m5lh7\") pod \"5ced29d4-ea1d-466a-bd87-816ec47775ef\" (UID: \"5ced29d4-ea1d-466a-bd87-816ec47775ef\") " Dec 27 06:15:03 crc kubenswrapper[4760]: I1227 06:15:03.062485 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ced29d4-ea1d-466a-bd87-816ec47775ef-config-volume" (OuterVolumeSpecName: "config-volume") pod "5ced29d4-ea1d-466a-bd87-816ec47775ef" (UID: "5ced29d4-ea1d-466a-bd87-816ec47775ef"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 27 06:15:03 crc kubenswrapper[4760]: I1227 06:15:03.068796 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ced29d4-ea1d-466a-bd87-816ec47775ef-kube-api-access-m5lh7" (OuterVolumeSpecName: "kube-api-access-m5lh7") pod "5ced29d4-ea1d-466a-bd87-816ec47775ef" (UID: "5ced29d4-ea1d-466a-bd87-816ec47775ef"). InnerVolumeSpecName "kube-api-access-m5lh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:15:03 crc kubenswrapper[4760]: I1227 06:15:03.069979 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ced29d4-ea1d-466a-bd87-816ec47775ef-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5ced29d4-ea1d-466a-bd87-816ec47775ef" (UID: "5ced29d4-ea1d-466a-bd87-816ec47775ef"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 27 06:15:03 crc kubenswrapper[4760]: I1227 06:15:03.163665 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ced29d4-ea1d-466a-bd87-816ec47775ef-config-volume\") on node \"crc\" DevicePath \"\"" Dec 27 06:15:03 crc kubenswrapper[4760]: I1227 06:15:03.163696 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5ced29d4-ea1d-466a-bd87-816ec47775ef-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 27 06:15:03 crc kubenswrapper[4760]: I1227 06:15:03.163709 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5lh7\" (UniqueName: \"kubernetes.io/projected/5ced29d4-ea1d-466a-bd87-816ec47775ef-kube-api-access-m5lh7\") on node \"crc\" DevicePath \"\"" Dec 27 06:15:03 crc kubenswrapper[4760]: I1227 06:15:03.599334 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d" event={"ID":"5ced29d4-ea1d-466a-bd87-816ec47775ef","Type":"ContainerDied","Data":"452ef491aa7e7212913e755f30424c4fab728f52e81e559ba9be54db6633d842"} Dec 27 06:15:03 crc kubenswrapper[4760]: I1227 06:15:03.599703 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="452ef491aa7e7212913e755f30424c4fab728f52e81e559ba9be54db6633d842" Dec 27 06:15:03 crc kubenswrapper[4760]: I1227 06:15:03.599386 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29446935-7822d" Dec 27 06:15:04 crc kubenswrapper[4760]: I1227 06:15:04.502986 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:15:04 crc kubenswrapper[4760]: E1227 06:15:04.503628 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:15:10 crc kubenswrapper[4760]: I1227 06:15:10.050451 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-create-fxnxx"] Dec 27 06:15:10 crc kubenswrapper[4760]: I1227 06:15:10.055656 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-create-zdmqk"] Dec 27 06:15:10 crc kubenswrapper[4760]: I1227 06:15:10.060356 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-60ab-account-create-update-pv4t6"] Dec 27 06:15:10 crc kubenswrapper[4760]: I1227 06:15:10.068417 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-create-zdmqk"] Dec 27 06:15:10 crc kubenswrapper[4760]: I1227 06:15:10.073220 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-create-fxnxx"] Dec 27 06:15:10 crc kubenswrapper[4760]: I1227 06:15:10.077592 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-60ab-account-create-update-pv4t6"] Dec 27 06:15:11 crc kubenswrapper[4760]: I1227 06:15:11.029732 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-86ae-account-create-update-9z6mb"] 
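The cycle repeating throughout this section — a "RemoveContainer" entry immediately followed by "Error syncing pod, skipping ... CrashLoopBackOff: back-off 5m0s" — is the kubelet's sync loop re-evaluating machine-config-daemon-xhkgh while the container sits in restart back-off after its failed liveness probe. A minimal Go sketch of that schedule follows, assuming the kubelet's usual doubling back-off with a 10s initial delay (the 10s base is an assumption; only the 5m cap is confirmed by the "back-off 5m0s" messages in this log):

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay = 10 * time.Second // assumed kubelet base back-off; not read from this node
		maxDelay     = 5 * time.Minute  // cap, matching the "back-off 5m0s" in the entries above
	)
	// After each failed restart the delay doubles until it reaches the cap.
	delay := initialDelay
	for attempt := 1; ; attempt++ {
		fmt.Printf("failed restart %d -> next retry in %s\n", attempt, delay)
		if delay == maxDelay {
			break // the schedule has hit its cap, as in this log
		}
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Each "RemoveContainer" / "Error syncing pod, skipping" pair above is one sync-loop pass landing inside the current back-off window; the 5m figure shows the schedule has already reached its cap, so the container will not be retried more often than every five minutes until a successful run clears the back-off.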
Dec 27 06:15:11 crc kubenswrapper[4760]: I1227 06:15:11.040383 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-86ae-account-create-update-9z6mb"] Dec 27 06:15:11 crc kubenswrapper[4760]: I1227 06:15:11.512192 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2522ef06-2d40-417c-9e4c-631cecbe0b25" path="/var/lib/kubelet/pods/2522ef06-2d40-417c-9e4c-631cecbe0b25/volumes" Dec 27 06:15:11 crc kubenswrapper[4760]: I1227 06:15:11.512942 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2efe4afd-0932-4485-9215-08b34620744e" path="/var/lib/kubelet/pods/2efe4afd-0932-4485-9215-08b34620744e/volumes" Dec 27 06:15:11 crc kubenswrapper[4760]: I1227 06:15:11.513469 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85a284e9-7716-4926-9cdd-fb2bab2edba2" path="/var/lib/kubelet/pods/85a284e9-7716-4926-9cdd-fb2bab2edba2/volumes" Dec 27 06:15:11 crc kubenswrapper[4760]: I1227 06:15:11.513946 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc9402b9-fedb-4d22-b889-92209fd2cf4b" path="/var/lib/kubelet/pods/cc9402b9-fedb-4d22-b889-92209fd2cf4b/volumes" Dec 27 06:15:17 crc kubenswrapper[4760]: I1227 06:15:17.507138 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:15:17 crc kubenswrapper[4760]: E1227 06:15:17.507810 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:15:24 crc kubenswrapper[4760]: I1227 06:15:24.047327 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-7r7jq"] Dec 27 06:15:24 crc kubenswrapper[4760]: I1227 06:15:24.057386 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-7r7jq"] Dec 27 06:15:25 crc kubenswrapper[4760]: I1227 06:15:25.509891 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af" path="/var/lib/kubelet/pods/3c2ee88a-0bf4-4a5e-a34a-0bdd5d21c4af/volumes" Dec 27 06:15:29 crc kubenswrapper[4760]: I1227 06:15:29.503405 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:15:29 crc kubenswrapper[4760]: E1227 06:15:29.503902 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:15:43 crc kubenswrapper[4760]: I1227 06:15:43.074124 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-568d76f566-slgbn_73020260-4c3a-4bd2-8749-58d052d076e3/manager/0.log" Dec 27 06:15:43 crc kubenswrapper[4760]: I1227 06:15:43.186322 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f_d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc/util/0.log" Dec 27 06:15:43 crc kubenswrapper[4760]: I1227 06:15:43.414872 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f_d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc/util/0.log" Dec 27 06:15:43 crc kubenswrapper[4760]: I1227 06:15:43.422307 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f_d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc/pull/0.log" Dec 27 06:15:43 crc kubenswrapper[4760]: I1227 06:15:43.427408 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f_d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc/pull/0.log" Dec 27 06:15:43 crc kubenswrapper[4760]: I1227 06:15:43.508824 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:15:43 crc kubenswrapper[4760]: E1227 06:15:43.509044 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:15:43 crc kubenswrapper[4760]: I1227 06:15:43.637154 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f_d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc/util/0.log" Dec 27 06:15:43 crc kubenswrapper[4760]: I1227 06:15:43.674917 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f_d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc/pull/0.log" Dec 27 06:15:43 crc kubenswrapper[4760]: I1227 06:15:43.677345 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c93c42eb303a3bf5039523ecb7206d1762e14215a919d744d6e20ccffa7l77f_d3ff586e-0b1a-4a96-8ede-cd81b34fbcbc/extract/0.log" Dec 27 06:15:43 crc kubenswrapper[4760]: I1227 06:15:43.828905 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5f98b4754f-km8pb_ea3b319d-0327-447a-a3fb-b872f98c5e99/manager/0.log" Dec 27 06:15:43 crc kubenswrapper[4760]: I1227 06:15:43.866277 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx_556d72b2-b316-46cf-9094-370351d21aee/util/0.log" Dec 27 06:15:44 crc kubenswrapper[4760]: I1227 06:15:44.017721 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx_556d72b2-b316-46cf-9094-370351d21aee/util/0.log" Dec 27 06:15:44 crc kubenswrapper[4760]: I1227 06:15:44.087757 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx_556d72b2-b316-46cf-9094-370351d21aee/pull/0.log" Dec 27 06:15:44 crc kubenswrapper[4760]: I1227 06:15:44.099435 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx_556d72b2-b316-46cf-9094-370351d21aee/pull/0.log" Dec 27 06:15:44 crc kubenswrapper[4760]: I1227 06:15:44.305971 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx_556d72b2-b316-46cf-9094-370351d21aee/util/0.log" Dec 27 06:15:44 crc kubenswrapper[4760]: I1227 06:15:44.314608 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx_556d72b2-b316-46cf-9094-370351d21aee/pull/0.log" Dec 27 06:15:44 crc kubenswrapper[4760]: I1227 06:15:44.337709 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c53ec8390551a0896180e419a249a31c02acd9ac50f5740b840d5d108krlx_556d72b2-b316-46cf-9094-370351d21aee/extract/0.log" Dec 27 06:15:44 crc kubenswrapper[4760]: I1227 06:15:44.397216 4760 scope.go:117] "RemoveContainer" containerID="7aeddfae2b053160c986a148aa8cbde37e8f69dc02d1f5e1c6a57e0990d0f234" Dec 27 06:15:44 crc kubenswrapper[4760]: I1227 06:15:44.416490 4760 scope.go:117] "RemoveContainer" containerID="10d472f3947bb722c4f7116e0bab55693d9dd61ef581ce42daa938223d102c0b" Dec 27 06:15:44 crc kubenswrapper[4760]: I1227 06:15:44.443857 4760 scope.go:117] "RemoveContainer" containerID="b0c82b247eff07aaabb3b6db2034a7bd43212cbf4d38eeb5cc6aa951b6971081" Dec 27 06:15:44 crc kubenswrapper[4760]: I1227 06:15:44.475424 4760 scope.go:117] "RemoveContainer" containerID="09078a8635bc15b4d7fbcaab150efa7d3a2c9b655ac5adb62017320fda2a488c" Dec 27 06:15:44 crc kubenswrapper[4760]: I1227 06:15:44.503765 4760 scope.go:117] "RemoveContainer" containerID="0824ae8706d79f4b410bb8a4235876ab8444e5545529f87ee82d738f501bb12d" Dec 27 06:15:44 crc kubenswrapper[4760]: I1227 06:15:44.520713 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-ggw2m_0dc73c70-a9ca-4cc4-a69a-21eaca2dd2c2/manager/0.log" Dec 27 06:15:44 crc kubenswrapper[4760]: I1227 06:15:44.604639 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-688f464774-5hjlr_177b0c02-1f5b-4315-b854-5465123ebcab/manager/0.log" Dec 27 06:15:44 crc kubenswrapper[4760]: I1227 06:15:44.735856 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-4fmwm_11fdabd0-f272-435e-86cf-a3fe7343eb0f/manager/0.log" Dec 27 06:15:44 crc kubenswrapper[4760]: I1227 06:15:44.826546 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-q4p4p_d0770d35-0c7d-4fe6-91f3-e42a6ada0c8d/manager/0.log" Dec 27 06:15:44 crc kubenswrapper[4760]: I1227 06:15:44.999146 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-vb95d_f6b0a3e5-c5dd-4b33-9262-6c8cc4a4ffd8/manager/0.log" Dec 27 06:15:45 crc kubenswrapper[4760]: I1227 06:15:45.088694 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-kss4t_fe17c9a5-ed8f-4fd6-8b6b-e2e2107092a2/manager/0.log" Dec 27 06:15:45 crc kubenswrapper[4760]: I1227 06:15:45.240455 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-q874z_0d86a798-bc69-4423-96ce-dc1b4fd03bc8/manager/0.log" Dec 27 06:15:45 crc kubenswrapper[4760]: I1227 06:15:45.287507 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5fdd9786f7-674bs_160a8b25-a031-4204-870f-2385ceaaf80e/manager/0.log" Dec 27 06:15:45 crc kubenswrapper[4760]: I1227 06:15:45.469808 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6d59c96c98-ksz65_0bc672f2-6342-4276-a266-c6bbd7f1896c/manager/0.log" Dec 27 06:15:45 crc kubenswrapper[4760]: I1227 06:15:45.493879 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-2rdv8_4de77998-df1d-4b52-8bbe-3cb9b35356fd/manager/0.log" Dec 27 06:15:45 crc kubenswrapper[4760]: I1227 06:15:45.709743 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7fd66c86cd-9rkmn_90731f81-cc83-40ff-af06-25f6ea776753/manager/0.log" Dec 27 06:15:45 crc kubenswrapper[4760]: I1227 06:15:45.733328 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-jt5kk_2ac71051-138e-44fa-a101-d00bd0811942/registry-server/0.log" Dec 27 06:15:45 crc kubenswrapper[4760]: I1227 06:15:45.924202 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-zjrv7_733f56b3-c5b1-4f9c-a35c-bbde9c68ebf1/manager/0.log" Dec 27 06:15:45 crc kubenswrapper[4760]: I1227 06:15:45.984703 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd7gwqwz_ed40f1e4-9823-4b11-b48e-4f4019a3796c/manager/0.log" Dec 27 06:15:46 crc kubenswrapper[4760]: I1227 06:15:46.174198 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8ml8h_a55331c6-2f6f-4a43-b6e0-69e5be40f28c/registry-server/0.log" Dec 27 06:15:46 crc kubenswrapper[4760]: I1227 06:15:46.308389 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-66d6f5c46d-cznpr_b0d5f45a-ddba-4d84-b567-17193cfaef2b/manager/0.log" Dec 27 06:15:46 crc kubenswrapper[4760]: I1227 06:15:46.420483 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-7gh59_e4d879d7-4ae6-4499-89ed-98f48e4f0541/manager/0.log" Dec 27 06:15:46 crc kubenswrapper[4760]: I1227 06:15:46.536476 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-7fdbb74498-gnjnq_d8cc7b7c-b6b1-4eab-b0bb-616b227dc790/manager/0.log" Dec 27 06:15:46 crc kubenswrapper[4760]: I1227 06:15:46.649131 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ps2r8_4aa3ec72-b438-4ef7-a213-0b6148aed51b/operator/0.log" Dec 27 06:15:46 crc kubenswrapper[4760]: I1227 06:15:46.744044 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-g4z7d_6c7e09e0-9c92-40fa-99e1-b510ab43fb39/manager/0.log" Dec 27 06:15:47 crc kubenswrapper[4760]: I1227 06:15:47.031753 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fbb89d79f-gwngw_3250a06b-b268-405c-b891-7657e4818fe8/manager/0.log" Dec 27 06:15:47 crc kubenswrapper[4760]: I1227 06:15:47.146788 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-74f84d69b6-vvjqd_f7aa22fd-6d4a-484b-9814-3e8e8766a423/manager/0.log" Dec 27 06:15:47 crc kubenswrapper[4760]: I1227 06:15:47.276546 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-57d64f56b7-g5mgk_f5ce1921-f89d-4e7f-b7b8-93d6a2a284bf/manager/0.log" Dec 27 06:15:54 crc kubenswrapper[4760]: I1227 06:15:54.502075 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:15:54 crc kubenswrapper[4760]: E1227 06:15:54.503908 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:15:59 crc kubenswrapper[4760]: I1227 06:15:59.059479 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-sync-2t87g"] Dec 27 06:15:59 crc kubenswrapper[4760]: I1227 06:15:59.072307 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-sync-2t87g"] Dec 27 06:15:59 crc kubenswrapper[4760]: I1227 06:15:59.511280 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d335d87-9e3c-4826-bea2-ec9884fde6e0" path="/var/lib/kubelet/pods/6d335d87-9e3c-4826-bea2-ec9884fde6e0/volumes" Dec 27 06:16:08 crc kubenswrapper[4760]: I1227 06:16:08.366105 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zfw4n_db38a69b-db55-4829-8258-bf3da32477ac/control-plane-machine-set-operator/0.log" Dec 27 06:16:08 crc kubenswrapper[4760]: I1227 06:16:08.561217 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9wjz8_32612889-890b-4efb-a777-8ad13a778841/kube-rbac-proxy/0.log" Dec 27 06:16:08 crc kubenswrapper[4760]: I1227 06:16:08.649866 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9wjz8_32612889-890b-4efb-a777-8ad13a778841/machine-api-operator/0.log" Dec 27 06:16:09 crc kubenswrapper[4760]: I1227 06:16:09.502658 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:16:09 crc kubenswrapper[4760]: E1227 06:16:09.502946 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:16:12 crc kubenswrapper[4760]: I1227 06:16:12.045034 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-sync-gjkm5"] Dec 27 06:16:12 crc 
kubenswrapper[4760]: I1227 06:16:12.051740 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-sync-gjkm5"] Dec 27 06:16:13 crc kubenswrapper[4760]: I1227 06:16:13.511274 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c288547a-4460-45d1-aad0-c311b34e2a6c" path="/var/lib/kubelet/pods/c288547a-4460-45d1-aad0-c311b34e2a6c/volumes" Dec 27 06:16:16 crc kubenswrapper[4760]: I1227 06:16:16.396056 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v9wgd"] Dec 27 06:16:16 crc kubenswrapper[4760]: E1227 06:16:16.396677 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ced29d4-ea1d-466a-bd87-816ec47775ef" containerName="collect-profiles" Dec 27 06:16:16 crc kubenswrapper[4760]: I1227 06:16:16.396690 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ced29d4-ea1d-466a-bd87-816ec47775ef" containerName="collect-profiles" Dec 27 06:16:16 crc kubenswrapper[4760]: I1227 06:16:16.396839 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ced29d4-ea1d-466a-bd87-816ec47775ef" containerName="collect-profiles" Dec 27 06:16:16 crc kubenswrapper[4760]: I1227 06:16:16.397992 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 06:16:16 crc kubenswrapper[4760]: I1227 06:16:16.406941 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvzj2\" (UniqueName: \"kubernetes.io/projected/bbe30641-3d44-4669-897d-9f6fcaf0a71c-kube-api-access-qvzj2\") pod \"redhat-operators-v9wgd\" (UID: \"bbe30641-3d44-4669-897d-9f6fcaf0a71c\") " pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 06:16:16 crc kubenswrapper[4760]: I1227 06:16:16.407160 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe30641-3d44-4669-897d-9f6fcaf0a71c-utilities\") pod \"redhat-operators-v9wgd\" (UID: \"bbe30641-3d44-4669-897d-9f6fcaf0a71c\") " pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 06:16:16 crc kubenswrapper[4760]: I1227 06:16:16.407358 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe30641-3d44-4669-897d-9f6fcaf0a71c-catalog-content\") pod \"redhat-operators-v9wgd\" (UID: \"bbe30641-3d44-4669-897d-9f6fcaf0a71c\") " pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 06:16:16 crc kubenswrapper[4760]: I1227 06:16:16.411475 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9wgd"] Dec 27 06:16:16 crc kubenswrapper[4760]: I1227 06:16:16.508200 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvzj2\" (UniqueName: \"kubernetes.io/projected/bbe30641-3d44-4669-897d-9f6fcaf0a71c-kube-api-access-qvzj2\") pod \"redhat-operators-v9wgd\" (UID: \"bbe30641-3d44-4669-897d-9f6fcaf0a71c\") " pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 06:16:16 crc kubenswrapper[4760]: I1227 06:16:16.508268 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe30641-3d44-4669-897d-9f6fcaf0a71c-utilities\") pod \"redhat-operators-v9wgd\" (UID: \"bbe30641-3d44-4669-897d-9f6fcaf0a71c\") " pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 
06:16:16 crc kubenswrapper[4760]: I1227 06:16:16.508370 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe30641-3d44-4669-897d-9f6fcaf0a71c-catalog-content\") pod \"redhat-operators-v9wgd\" (UID: \"bbe30641-3d44-4669-897d-9f6fcaf0a71c\") " pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 06:16:16 crc kubenswrapper[4760]: I1227 06:16:16.508839 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe30641-3d44-4669-897d-9f6fcaf0a71c-catalog-content\") pod \"redhat-operators-v9wgd\" (UID: \"bbe30641-3d44-4669-897d-9f6fcaf0a71c\") " pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 06:16:16 crc kubenswrapper[4760]: I1227 06:16:16.509438 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe30641-3d44-4669-897d-9f6fcaf0a71c-utilities\") pod \"redhat-operators-v9wgd\" (UID: \"bbe30641-3d44-4669-897d-9f6fcaf0a71c\") " pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 06:16:16 crc kubenswrapper[4760]: I1227 06:16:16.527943 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvzj2\" (UniqueName: \"kubernetes.io/projected/bbe30641-3d44-4669-897d-9f6fcaf0a71c-kube-api-access-qvzj2\") pod \"redhat-operators-v9wgd\" (UID: \"bbe30641-3d44-4669-897d-9f6fcaf0a71c\") " pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 06:16:16 crc kubenswrapper[4760]: I1227 06:16:16.733228 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 06:16:17 crc kubenswrapper[4760]: I1227 06:16:17.041725 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-pws6r"] Dec 27 06:16:17 crc kubenswrapper[4760]: I1227 06:16:17.050374 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-pws6r"] Dec 27 06:16:17 crc kubenswrapper[4760]: I1227 06:16:17.168862 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9wgd"] Dec 27 06:16:17 crc kubenswrapper[4760]: I1227 06:16:17.512064 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9932da50-7e9f-48e5-a9a0-236cc084abb7" path="/var/lib/kubelet/pods/9932da50-7e9f-48e5-a9a0-236cc084abb7/volumes" Dec 27 06:16:18 crc kubenswrapper[4760]: I1227 06:16:18.156661 4760 generic.go:334] "Generic (PLEG): container finished" podID="bbe30641-3d44-4669-897d-9f6fcaf0a71c" containerID="6793c6044ce24a19fff548c1ad1373d26f501ab6b552fb71c1a130c3f309dccc" exitCode=0 Dec 27 06:16:18 crc kubenswrapper[4760]: I1227 06:16:18.156733 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9wgd" event={"ID":"bbe30641-3d44-4669-897d-9f6fcaf0a71c","Type":"ContainerDied","Data":"6793c6044ce24a19fff548c1ad1373d26f501ab6b552fb71c1a130c3f309dccc"} Dec 27 06:16:18 crc kubenswrapper[4760]: I1227 06:16:18.156766 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9wgd" event={"ID":"bbe30641-3d44-4669-897d-9f6fcaf0a71c","Type":"ContainerStarted","Data":"dbfa25f42893c547391ece7f9b80db53006935f43f8374445b2ee8ceb19d58e1"} Dec 27 06:16:19 crc kubenswrapper[4760]: I1227 06:16:19.168163 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9wgd" 
event={"ID":"bbe30641-3d44-4669-897d-9f6fcaf0a71c","Type":"ContainerStarted","Data":"f633c6be094dd531f5dc760ea00bbee41b3e09d735fa275ec329e26f5a5a2603"} Dec 27 06:16:22 crc kubenswrapper[4760]: I1227 06:16:22.190630 4760 generic.go:334] "Generic (PLEG): container finished" podID="bbe30641-3d44-4669-897d-9f6fcaf0a71c" containerID="f633c6be094dd531f5dc760ea00bbee41b3e09d735fa275ec329e26f5a5a2603" exitCode=0 Dec 27 06:16:22 crc kubenswrapper[4760]: I1227 06:16:22.190726 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9wgd" event={"ID":"bbe30641-3d44-4669-897d-9f6fcaf0a71c","Type":"ContainerDied","Data":"f633c6be094dd531f5dc760ea00bbee41b3e09d735fa275ec329e26f5a5a2603"} Dec 27 06:16:23 crc kubenswrapper[4760]: I1227 06:16:23.344727 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-jps6t_842d0d88-e62c-4967-8863-013433d2218b/cert-manager-controller/0.log" Dec 27 06:16:23 crc kubenswrapper[4760]: I1227 06:16:23.503218 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:16:23 crc kubenswrapper[4760]: E1227 06:16:23.503395 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:16:23 crc kubenswrapper[4760]: I1227 06:16:23.544042 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-8wvq2_29ba29ce-2b1b-42a8-9107-dc4dfbf96ee7/cert-manager-cainjector/0.log" Dec 27 06:16:23 crc kubenswrapper[4760]: I1227 06:16:23.616903 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-vjwvm_39315eb5-bebe-48ac-80c7-b4ea6f02c508/cert-manager-webhook/0.log" Dec 27 06:16:24 crc kubenswrapper[4760]: I1227 06:16:24.205686 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9wgd" event={"ID":"bbe30641-3d44-4669-897d-9f6fcaf0a71c","Type":"ContainerStarted","Data":"ef3cefc0310a657da8969ac17cae798bf92a6dea75b4fece28181dc3cdbd3772"} Dec 27 06:16:24 crc kubenswrapper[4760]: I1227 06:16:24.226591 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v9wgd" podStartSLOduration=2.575005369 podStartE2EDuration="8.226570606s" podCreationTimestamp="2025-12-27 06:16:16 +0000 UTC" firstStartedPulling="2025-12-27 06:16:18.158166608 +0000 UTC m=+1900.918235923" lastFinishedPulling="2025-12-27 06:16:23.809731845 +0000 UTC m=+1906.569801160" observedRunningTime="2025-12-27 06:16:24.220764884 +0000 UTC m=+1906.980834219" watchObservedRunningTime="2025-12-27 06:16:24.226570606 +0000 UTC m=+1906.986639941" Dec 27 06:16:26 crc kubenswrapper[4760]: I1227 06:16:26.734332 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 06:16:26 crc kubenswrapper[4760]: I1227 06:16:26.734680 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 06:16:27 crc kubenswrapper[4760]: I1227 06:16:27.790600 4760 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-v9wgd" podUID="bbe30641-3d44-4669-897d-9f6fcaf0a71c" containerName="registry-server" probeResult="failure" output=< Dec 27 06:16:27 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Dec 27 06:16:27 crc kubenswrapper[4760]: > Dec 27 06:16:35 crc kubenswrapper[4760]: I1227 06:16:35.525213 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:16:35 crc kubenswrapper[4760]: E1227 06:16:35.526016 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:16:36 crc kubenswrapper[4760]: I1227 06:16:36.788451 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 06:16:36 crc kubenswrapper[4760]: I1227 06:16:36.847753 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 06:16:37 crc kubenswrapper[4760]: I1227 06:16:37.019451 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v9wgd"] Dec 27 06:16:38 crc kubenswrapper[4760]: I1227 06:16:38.084474 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-vn8tw_0666407d-d4c1-497a-ae83-518e6ba70085/nmstate-console-plugin/0.log" Dec 27 06:16:38 crc kubenswrapper[4760]: I1227 06:16:38.277285 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-78cl5_ac16e936-081f-470c-a4a9-480fae986f2e/nmstate-handler/0.log" Dec 27 06:16:38 crc kubenswrapper[4760]: I1227 06:16:38.317964 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v9wgd" podUID="bbe30641-3d44-4669-897d-9f6fcaf0a71c" containerName="registry-server" containerID="cri-o://ef3cefc0310a657da8969ac17cae798bf92a6dea75b4fece28181dc3cdbd3772" gracePeriod=2 Dec 27 06:16:38 crc kubenswrapper[4760]: I1227 06:16:38.421679 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-fvcbl_227bb17c-06ab-4906-8d95-b0146bf1868d/kube-rbac-proxy/0.log" Dec 27 06:16:38 crc kubenswrapper[4760]: I1227 06:16:38.436152 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-fvcbl_227bb17c-06ab-4906-8d95-b0146bf1868d/nmstate-metrics/0.log" Dec 27 06:16:38 crc kubenswrapper[4760]: I1227 06:16:38.555874 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-qcdnn_cb7c9b2c-f35f-42bf-8419-5c0615323e3a/nmstate-operator/0.log" Dec 27 06:16:38 crc kubenswrapper[4760]: I1227 06:16:38.727049 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 06:16:38 crc kubenswrapper[4760]: I1227 06:16:38.756180 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-fbhrm_4f66b8d6-242e-42ea-959d-304f51532744/nmstate-webhook/0.log" Dec 27 06:16:38 crc kubenswrapper[4760]: I1227 06:16:38.892263 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe30641-3d44-4669-897d-9f6fcaf0a71c-catalog-content\") pod \"bbe30641-3d44-4669-897d-9f6fcaf0a71c\" (UID: \"bbe30641-3d44-4669-897d-9f6fcaf0a71c\") " Dec 27 06:16:38 crc kubenswrapper[4760]: I1227 06:16:38.892304 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe30641-3d44-4669-897d-9f6fcaf0a71c-utilities\") pod \"bbe30641-3d44-4669-897d-9f6fcaf0a71c\" (UID: \"bbe30641-3d44-4669-897d-9f6fcaf0a71c\") " Dec 27 06:16:38 crc kubenswrapper[4760]: I1227 06:16:38.892488 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvzj2\" (UniqueName: \"kubernetes.io/projected/bbe30641-3d44-4669-897d-9f6fcaf0a71c-kube-api-access-qvzj2\") pod \"bbe30641-3d44-4669-897d-9f6fcaf0a71c\" (UID: \"bbe30641-3d44-4669-897d-9f6fcaf0a71c\") " Dec 27 06:16:38 crc kubenswrapper[4760]: I1227 06:16:38.893395 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbe30641-3d44-4669-897d-9f6fcaf0a71c-utilities" (OuterVolumeSpecName: "utilities") pod "bbe30641-3d44-4669-897d-9f6fcaf0a71c" (UID: "bbe30641-3d44-4669-897d-9f6fcaf0a71c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:16:38 crc kubenswrapper[4760]: I1227 06:16:38.900242 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe30641-3d44-4669-897d-9f6fcaf0a71c-kube-api-access-qvzj2" (OuterVolumeSpecName: "kube-api-access-qvzj2") pod "bbe30641-3d44-4669-897d-9f6fcaf0a71c" (UID: "bbe30641-3d44-4669-897d-9f6fcaf0a71c"). InnerVolumeSpecName "kube-api-access-qvzj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:16:38 crc kubenswrapper[4760]: I1227 06:16:38.993843 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvzj2\" (UniqueName: \"kubernetes.io/projected/bbe30641-3d44-4669-897d-9f6fcaf0a71c-kube-api-access-qvzj2\") on node \"crc\" DevicePath \"\"" Dec 27 06:16:38 crc kubenswrapper[4760]: I1227 06:16:38.993875 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe30641-3d44-4669-897d-9f6fcaf0a71c-utilities\") on node \"crc\" DevicePath \"\"" Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.006856 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbe30641-3d44-4669-897d-9f6fcaf0a71c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbe30641-3d44-4669-897d-9f6fcaf0a71c" (UID: "bbe30641-3d44-4669-897d-9f6fcaf0a71c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.095439 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe30641-3d44-4669-897d-9f6fcaf0a71c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.325524 4760 generic.go:334] "Generic (PLEG): container finished" podID="bbe30641-3d44-4669-897d-9f6fcaf0a71c" containerID="ef3cefc0310a657da8969ac17cae798bf92a6dea75b4fece28181dc3cdbd3772" exitCode=0 Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.325559 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9wgd" event={"ID":"bbe30641-3d44-4669-897d-9f6fcaf0a71c","Type":"ContainerDied","Data":"ef3cefc0310a657da8969ac17cae798bf92a6dea75b4fece28181dc3cdbd3772"} Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.325585 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9wgd" event={"ID":"bbe30641-3d44-4669-897d-9f6fcaf0a71c","Type":"ContainerDied","Data":"dbfa25f42893c547391ece7f9b80db53006935f43f8374445b2ee8ceb19d58e1"} Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.325601 4760 scope.go:117] "RemoveContainer" containerID="ef3cefc0310a657da8969ac17cae798bf92a6dea75b4fece28181dc3cdbd3772" Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.325702 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9wgd" Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.366654 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v9wgd"] Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.367525 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v9wgd"] Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.379073 4760 scope.go:117] "RemoveContainer" containerID="f633c6be094dd531f5dc760ea00bbee41b3e09d735fa275ec329e26f5a5a2603" Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.411883 4760 scope.go:117] "RemoveContainer" containerID="6793c6044ce24a19fff548c1ad1373d26f501ab6b552fb71c1a130c3f309dccc" Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.446809 4760 scope.go:117] "RemoveContainer" containerID="ef3cefc0310a657da8969ac17cae798bf92a6dea75b4fece28181dc3cdbd3772" Dec 27 06:16:39 crc kubenswrapper[4760]: E1227 06:16:39.447314 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef3cefc0310a657da8969ac17cae798bf92a6dea75b4fece28181dc3cdbd3772\": container with ID starting with ef3cefc0310a657da8969ac17cae798bf92a6dea75b4fece28181dc3cdbd3772 not found: ID does not exist" containerID="ef3cefc0310a657da8969ac17cae798bf92a6dea75b4fece28181dc3cdbd3772" Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.447352 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3cefc0310a657da8969ac17cae798bf92a6dea75b4fece28181dc3cdbd3772"} err="failed to get container status \"ef3cefc0310a657da8969ac17cae798bf92a6dea75b4fece28181dc3cdbd3772\": rpc error: code = NotFound desc = could not find container \"ef3cefc0310a657da8969ac17cae798bf92a6dea75b4fece28181dc3cdbd3772\": container with ID starting with ef3cefc0310a657da8969ac17cae798bf92a6dea75b4fece28181dc3cdbd3772 not found: ID does not exist" Dec 27 06:16:39 crc 
kubenswrapper[4760]: I1227 06:16:39.447374 4760 scope.go:117] "RemoveContainer" containerID="f633c6be094dd531f5dc760ea00bbee41b3e09d735fa275ec329e26f5a5a2603" Dec 27 06:16:39 crc kubenswrapper[4760]: E1227 06:16:39.447632 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f633c6be094dd531f5dc760ea00bbee41b3e09d735fa275ec329e26f5a5a2603\": container with ID starting with f633c6be094dd531f5dc760ea00bbee41b3e09d735fa275ec329e26f5a5a2603 not found: ID does not exist" containerID="f633c6be094dd531f5dc760ea00bbee41b3e09d735fa275ec329e26f5a5a2603" Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.447656 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f633c6be094dd531f5dc760ea00bbee41b3e09d735fa275ec329e26f5a5a2603"} err="failed to get container status \"f633c6be094dd531f5dc760ea00bbee41b3e09d735fa275ec329e26f5a5a2603\": rpc error: code = NotFound desc = could not find container \"f633c6be094dd531f5dc760ea00bbee41b3e09d735fa275ec329e26f5a5a2603\": container with ID starting with f633c6be094dd531f5dc760ea00bbee41b3e09d735fa275ec329e26f5a5a2603 not found: ID does not exist" Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.447670 4760 scope.go:117] "RemoveContainer" containerID="6793c6044ce24a19fff548c1ad1373d26f501ab6b552fb71c1a130c3f309dccc" Dec 27 06:16:39 crc kubenswrapper[4760]: E1227 06:16:39.447895 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6793c6044ce24a19fff548c1ad1373d26f501ab6b552fb71c1a130c3f309dccc\": container with ID starting with 6793c6044ce24a19fff548c1ad1373d26f501ab6b552fb71c1a130c3f309dccc not found: ID does not exist" containerID="6793c6044ce24a19fff548c1ad1373d26f501ab6b552fb71c1a130c3f309dccc" Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.447916 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6793c6044ce24a19fff548c1ad1373d26f501ab6b552fb71c1a130c3f309dccc"} err="failed to get container status \"6793c6044ce24a19fff548c1ad1373d26f501ab6b552fb71c1a130c3f309dccc\": rpc error: code = NotFound desc = could not find container \"6793c6044ce24a19fff548c1ad1373d26f501ab6b552fb71c1a130c3f309dccc\": container with ID starting with 6793c6044ce24a19fff548c1ad1373d26f501ab6b552fb71c1a130c3f309dccc not found: ID does not exist" Dec 27 06:16:39 crc kubenswrapper[4760]: I1227 06:16:39.510316 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbe30641-3d44-4669-897d-9f6fcaf0a71c" path="/var/lib/kubelet/pods/bbe30641-3d44-4669-897d-9f6fcaf0a71c/volumes" Dec 27 06:16:44 crc kubenswrapper[4760]: I1227 06:16:44.593683 4760 scope.go:117] "RemoveContainer" containerID="bad73730055928a71f66a5d77c61c785e71a4ee8586cb7d2ba91242c0dc02267" Dec 27 06:16:44 crc kubenswrapper[4760]: I1227 06:16:44.621182 4760 scope.go:117] "RemoveContainer" containerID="053d5e087d7b78a168164cb8be104e1ffb9a573d1f8eb0d4242386bfd5680e9f" Dec 27 06:16:44 crc kubenswrapper[4760]: I1227 06:16:44.649907 4760 scope.go:117] "RemoveContainer" containerID="9f8797c72f9d3f75ac55915efe640da116c17c66edfde2b0f6bbd177e3fb695d" Dec 27 06:16:48 crc kubenswrapper[4760]: I1227 06:16:48.502488 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:16:48 crc kubenswrapper[4760]: E1227 06:16:48.503127 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:16:54 crc kubenswrapper[4760]: I1227 06:16:54.941276 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-sjdjf_a9d7a04b-5e48-421d-8f15-567afba65f65/controller/0.log" Dec 27 06:16:54 crc kubenswrapper[4760]: I1227 06:16:54.945064 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-sjdjf_a9d7a04b-5e48-421d-8f15-567afba65f65/kube-rbac-proxy/0.log" Dec 27 06:16:55 crc kubenswrapper[4760]: I1227 06:16:55.109013 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/cp-frr-files/0.log" Dec 27 06:16:55 crc kubenswrapper[4760]: I1227 06:16:55.348593 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/cp-reloader/0.log" Dec 27 06:16:55 crc kubenswrapper[4760]: I1227 06:16:55.359496 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/cp-frr-files/0.log" Dec 27 06:16:55 crc kubenswrapper[4760]: I1227 06:16:55.368810 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/cp-reloader/0.log" Dec 27 06:16:55 crc kubenswrapper[4760]: I1227 06:16:55.417398 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/cp-metrics/0.log" Dec 27 06:16:55 crc kubenswrapper[4760]: I1227 06:16:55.806939 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/cp-reloader/0.log" Dec 27 06:16:55 crc kubenswrapper[4760]: I1227 06:16:55.822576 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/cp-metrics/0.log" Dec 27 06:16:55 crc kubenswrapper[4760]: I1227 06:16:55.823765 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/cp-frr-files/0.log" Dec 27 06:16:55 crc kubenswrapper[4760]: I1227 06:16:55.838512 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/cp-metrics/0.log" Dec 27 06:16:56 crc kubenswrapper[4760]: I1227 06:16:56.023657 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/cp-metrics/0.log" Dec 27 06:16:56 crc kubenswrapper[4760]: I1227 06:16:56.046777 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/cp-reloader/0.log" Dec 27 06:16:56 crc kubenswrapper[4760]: I1227 06:16:56.053516 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/cp-frr-files/0.log" Dec 27 06:16:56 crc kubenswrapper[4760]: I1227 06:16:56.062278 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/controller/0.log" Dec 27 06:16:56 crc kubenswrapper[4760]: I1227 06:16:56.285463 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/kube-rbac-proxy/0.log" Dec 27 06:16:56 crc kubenswrapper[4760]: I1227 06:16:56.285642 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/frr-metrics/0.log" Dec 27 06:16:56 crc kubenswrapper[4760]: I1227 06:16:56.358249 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/kube-rbac-proxy-frr/0.log" Dec 27 06:16:56 crc kubenswrapper[4760]: I1227 06:16:56.505720 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/frr/0.log" Dec 27 06:16:56 crc kubenswrapper[4760]: I1227 06:16:56.508605 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sjf4c_6c6dbd13-bfc2-4fde-9787-c1c4b2793736/reloader/0.log" Dec 27 06:16:56 crc kubenswrapper[4760]: I1227 06:16:56.606352 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-dmlkj_b4aff88c-89b8-47dd-a535-c806e81073ff/frr-k8s-webhook-server/0.log" Dec 27 06:16:56 crc kubenswrapper[4760]: I1227 06:16:56.690902 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5657944b46-gqznd_cc9fca43-72d1-4ee1-b32e-dda6d593659d/manager/0.log" Dec 27 06:16:56 crc kubenswrapper[4760]: I1227 06:16:56.775866 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-797b4b66d5-rcqt6_a365d3ec-e794-4f15-9e95-1c502f25d860/webhook-server/0.log" Dec 27 06:16:56 crc kubenswrapper[4760]: I1227 06:16:56.909914 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fj4vj_d4d0e743-0bef-49a7-9e0c-15c2e212944a/kube-rbac-proxy/0.log" Dec 27 06:16:57 crc kubenswrapper[4760]: I1227 06:16:57.032064 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fj4vj_d4d0e743-0bef-49a7-9e0c-15c2e212944a/speaker/0.log" Dec 27 06:16:59 crc kubenswrapper[4760]: I1227 06:16:59.503006 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:16:59 crc kubenswrapper[4760]: E1227 06:16:59.503519 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xhkgh_openshift-machine-config-operator(4817e744-ce93-48b6-8642-f3ae31d2db1b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" Dec 27 06:17:13 crc kubenswrapper[4760]: I1227 06:17:13.502336 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717" Dec 27 06:17:13 crc kubenswrapper[4760]: I1227 06:17:13.982315 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-7d4784c744-hzvm6_61704514-50fd-411e-8fb5-93bcf85fc4df/keystone-api/0.log" Dec 27 06:17:14 crc kubenswrapper[4760]: I1227 06:17:14.270383 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_5afbca79-d46f-48e9-82f8-2f676c4c7960/mysql-bootstrap/0.log" Dec 27 06:17:14 crc kubenswrapper[4760]: I1227 06:17:14.309597 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_6fc09772-cca9-4ff6-88ae-b66171f0745f/memcached/0.log" Dec 27 06:17:14 crc kubenswrapper[4760]: I1227 06:17:14.445531 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_5afbca79-d46f-48e9-82f8-2f676c4c7960/galera/0.log" Dec 27 06:17:14 crc kubenswrapper[4760]: I1227 06:17:14.529228 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_5afbca79-d46f-48e9-82f8-2f676c4c7960/mysql-bootstrap/0.log" Dec 27 06:17:14 crc kubenswrapper[4760]: I1227 06:17:14.577487 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_ed93a46f-1df6-4144-8487-08764749423a/mysql-bootstrap/0.log" Dec 27 06:17:14 crc kubenswrapper[4760]: I1227 06:17:14.587751 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerStarted","Data":"3474d515fd56fd3240f266f710815bf5c096ca2dc078649cebed13766d48b76a"} Dec 27 06:17:14 crc kubenswrapper[4760]: I1227 06:17:14.843585 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_ed93a46f-1df6-4144-8487-08764749423a/mysql-bootstrap/0.log" Dec 27 06:17:14 crc kubenswrapper[4760]: I1227 06:17:14.880276 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_ed93a46f-1df6-4144-8487-08764749423a/galera/0.log" Dec 27 06:17:14 crc kubenswrapper[4760]: I1227 06:17:14.893637 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_1d85013d-1d0a-4d2a-8322-7fc10a3745b7/openstackclient/0.log" Dec 27 06:17:15 crc kubenswrapper[4760]: I1227 06:17:15.041122 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-84867d49bd-zw5cr_14954fd5-cd52-4027-a904-27bfb69d6c6d/placement-api/0.log" Dec 27 06:17:15 crc kubenswrapper[4760]: I1227 06:17:15.078049 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-84867d49bd-zw5cr_14954fd5-cd52-4027-a904-27bfb69d6c6d/placement-log/0.log" Dec 27 06:17:15 crc kubenswrapper[4760]: I1227 06:17:15.224362 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_9c61bc46-c657-4109-ae26-5f2d02fcce40/setup-container/0.log" Dec 27 06:17:15 crc kubenswrapper[4760]: I1227 06:17:15.429528 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_9c61bc46-c657-4109-ae26-5f2d02fcce40/setup-container/0.log" Dec 27 06:17:15 crc kubenswrapper[4760]: I1227 06:17:15.454346 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_51ff80ae-27ca-4914-8c80-008f6d2d0860/setup-container/0.log" Dec 27 06:17:15 crc kubenswrapper[4760]: I1227 06:17:15.499772 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_9c61bc46-c657-4109-ae26-5f2d02fcce40/rabbitmq/0.log" Dec 27 06:17:15 crc kubenswrapper[4760]: I1227 06:17:15.694586 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_51ff80ae-27ca-4914-8c80-008f6d2d0860/setup-container/0.log" Dec 27 06:17:15 crc kubenswrapper[4760]: I1227 06:17:15.711437 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_51ff80ae-27ca-4914-8c80-008f6d2d0860/rabbitmq/0.log" Dec 27 06:17:15 crc kubenswrapper[4760]: I1227 06:17:15.758782 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_88c1643d-b9a1-49ac-aff2-39bff3918b3e/setup-container/0.log" Dec 27 06:17:15 crc kubenswrapper[4760]: I1227 06:17:15.942956 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_88c1643d-b9a1-49ac-aff2-39bff3918b3e/setup-container/0.log" Dec 27 06:17:15 crc kubenswrapper[4760]: I1227 06:17:15.973419 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_88c1643d-b9a1-49ac-aff2-39bff3918b3e/rabbitmq/0.log" Dec 27 06:17:30 crc kubenswrapper[4760]: I1227 06:17:30.819610 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m_3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf/util/0.log" Dec 27 06:17:30 crc kubenswrapper[4760]: I1227 06:17:30.989784 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m_3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf/util/0.log" Dec 27 06:17:31 crc kubenswrapper[4760]: I1227 06:17:31.021918 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m_3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf/pull/0.log" Dec 27 06:17:31 crc kubenswrapper[4760]: I1227 06:17:31.064986 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m_3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf/pull/0.log" Dec 27 06:17:31 crc kubenswrapper[4760]: I1227 06:17:31.254265 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m_3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf/extract/0.log" Dec 27 06:17:31 crc kubenswrapper[4760]: I1227 06:17:31.273349 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m_3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf/util/0.log" Dec 27 06:17:31 crc kubenswrapper[4760]: I1227 06:17:31.326322 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931acwm7m_3a4c9a79-f9a4-4e1b-90dc-05ab9d98dccf/pull/0.log" Dec 27 06:17:31 crc kubenswrapper[4760]: I1227 06:17:31.475186 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld_9015155d-bc63-47e7-8f74-ccf0e122b05f/util/0.log" Dec 27 06:17:31 crc kubenswrapper[4760]: I1227 06:17:31.643424 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld_9015155d-bc63-47e7-8f74-ccf0e122b05f/pull/0.log" Dec 27 06:17:31 crc kubenswrapper[4760]: I1227 06:17:31.667286 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld_9015155d-bc63-47e7-8f74-ccf0e122b05f/util/0.log" Dec 27 06:17:31 crc kubenswrapper[4760]: I1227 06:17:31.691111 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld_9015155d-bc63-47e7-8f74-ccf0e122b05f/pull/0.log" Dec 27 06:17:32 crc kubenswrapper[4760]: I1227 06:17:32.011657 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld_9015155d-bc63-47e7-8f74-ccf0e122b05f/util/0.log" Dec 27 06:17:32 crc kubenswrapper[4760]: I1227 06:17:32.057730 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld_9015155d-bc63-47e7-8f74-ccf0e122b05f/pull/0.log" Dec 27 06:17:32 crc kubenswrapper[4760]: I1227 06:17:32.059216 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4smqld_9015155d-bc63-47e7-8f74-ccf0e122b05f/extract/0.log" Dec 27 06:17:32 crc kubenswrapper[4760]: I1227 06:17:32.172743 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m_030247e7-bd9a-4c58-8c3c-aad08db6895d/util/0.log" Dec 27 06:17:32 crc kubenswrapper[4760]: I1227 06:17:32.397964 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m_030247e7-bd9a-4c58-8c3c-aad08db6895d/pull/0.log" Dec 27 06:17:32 crc kubenswrapper[4760]: I1227 06:17:32.418184 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m_030247e7-bd9a-4c58-8c3c-aad08db6895d/pull/0.log" Dec 27 06:17:32 crc kubenswrapper[4760]: I1227 06:17:32.434011 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m_030247e7-bd9a-4c58-8c3c-aad08db6895d/util/0.log" Dec 27 06:17:32 crc kubenswrapper[4760]: I1227 06:17:32.605180 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m_030247e7-bd9a-4c58-8c3c-aad08db6895d/extract/0.log" Dec 27 06:17:32 crc kubenswrapper[4760]: I1227 06:17:32.632390 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m_030247e7-bd9a-4c58-8c3c-aad08db6895d/util/0.log" Dec 27 06:17:32 crc kubenswrapper[4760]: I1227 06:17:32.664774 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8ssv5m_030247e7-bd9a-4c58-8c3c-aad08db6895d/pull/0.log" Dec 27 06:17:32 crc kubenswrapper[4760]: I1227 06:17:32.775837 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f5xqc_8924ab71-d3e4-4709-a666-c70f96fe55a7/extract-utilities/0.log" Dec 27 06:17:33 crc kubenswrapper[4760]: I1227 06:17:33.041309 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f5xqc_8924ab71-d3e4-4709-a666-c70f96fe55a7/extract-utilities/0.log" Dec 27 06:17:33 crc 
kubenswrapper[4760]: I1227 06:17:33.051889 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f5xqc_8924ab71-d3e4-4709-a666-c70f96fe55a7/extract-content/0.log" Dec 27 06:17:33 crc kubenswrapper[4760]: I1227 06:17:33.073106 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f5xqc_8924ab71-d3e4-4709-a666-c70f96fe55a7/extract-content/0.log" Dec 27 06:17:33 crc kubenswrapper[4760]: I1227 06:17:33.219861 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f5xqc_8924ab71-d3e4-4709-a666-c70f96fe55a7/extract-content/0.log" Dec 27 06:17:33 crc kubenswrapper[4760]: I1227 06:17:33.232727 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f5xqc_8924ab71-d3e4-4709-a666-c70f96fe55a7/extract-utilities/0.log" Dec 27 06:17:33 crc kubenswrapper[4760]: I1227 06:17:33.537199 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f5xqc_8924ab71-d3e4-4709-a666-c70f96fe55a7/registry-server/0.log" Dec 27 06:17:33 crc kubenswrapper[4760]: I1227 06:17:33.546823 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6kjl_ddd392d2-7dda-4bed-bdf2-368682cb3c71/extract-utilities/0.log" Dec 27 06:17:33 crc kubenswrapper[4760]: I1227 06:17:33.722628 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6kjl_ddd392d2-7dda-4bed-bdf2-368682cb3c71/extract-utilities/0.log" Dec 27 06:17:33 crc kubenswrapper[4760]: I1227 06:17:33.738539 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6kjl_ddd392d2-7dda-4bed-bdf2-368682cb3c71/extract-content/0.log" Dec 27 06:17:33 crc kubenswrapper[4760]: I1227 06:17:33.747380 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6kjl_ddd392d2-7dda-4bed-bdf2-368682cb3c71/extract-content/0.log" Dec 27 06:17:33 crc kubenswrapper[4760]: I1227 06:17:33.900308 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6kjl_ddd392d2-7dda-4bed-bdf2-368682cb3c71/extract-utilities/0.log" Dec 27 06:17:33 crc kubenswrapper[4760]: I1227 06:17:33.931018 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6kjl_ddd392d2-7dda-4bed-bdf2-368682cb3c71/extract-content/0.log" Dec 27 06:17:34 crc kubenswrapper[4760]: I1227 06:17:34.162367 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zhrlh_a8049c10-25cf-46ee-b24a-42b5e5af0d6a/marketplace-operator/0.log" Dec 27 06:17:34 crc kubenswrapper[4760]: I1227 06:17:34.178123 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x6kjl_ddd392d2-7dda-4bed-bdf2-368682cb3c71/registry-server/0.log" Dec 27 06:17:34 crc kubenswrapper[4760]: I1227 06:17:34.220492 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-75pzc_c33de9e2-91e8-425a-8250-5301c5aef450/extract-utilities/0.log" Dec 27 06:17:34 crc kubenswrapper[4760]: I1227 06:17:34.398453 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-75pzc_c33de9e2-91e8-425a-8250-5301c5aef450/extract-utilities/0.log" Dec 27 06:17:34 crc 
kubenswrapper[4760]: I1227 06:17:34.421723 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-75pzc_c33de9e2-91e8-425a-8250-5301c5aef450/extract-content/0.log" Dec 27 06:17:34 crc kubenswrapper[4760]: I1227 06:17:34.445513 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-75pzc_c33de9e2-91e8-425a-8250-5301c5aef450/extract-content/0.log" Dec 27 06:17:34 crc kubenswrapper[4760]: I1227 06:17:34.630970 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-75pzc_c33de9e2-91e8-425a-8250-5301c5aef450/extract-utilities/0.log" Dec 27 06:17:34 crc kubenswrapper[4760]: I1227 06:17:34.640731 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-75pzc_c33de9e2-91e8-425a-8250-5301c5aef450/extract-content/0.log" Dec 27 06:17:34 crc kubenswrapper[4760]: I1227 06:17:34.712526 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-75pzc_c33de9e2-91e8-425a-8250-5301c5aef450/registry-server/0.log" Dec 27 06:17:34 crc kubenswrapper[4760]: I1227 06:17:34.817166 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9hbs_9c6fd0b7-8355-48f8-bc7b-80168772194d/extract-utilities/0.log" Dec 27 06:17:35 crc kubenswrapper[4760]: I1227 06:17:35.004981 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9hbs_9c6fd0b7-8355-48f8-bc7b-80168772194d/extract-content/0.log" Dec 27 06:17:35 crc kubenswrapper[4760]: I1227 06:17:35.037793 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9hbs_9c6fd0b7-8355-48f8-bc7b-80168772194d/extract-content/0.log" Dec 27 06:17:35 crc kubenswrapper[4760]: I1227 06:17:35.063866 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9hbs_9c6fd0b7-8355-48f8-bc7b-80168772194d/extract-utilities/0.log" Dec 27 06:17:35 crc kubenswrapper[4760]: I1227 06:17:35.210987 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9hbs_9c6fd0b7-8355-48f8-bc7b-80168772194d/extract-content/0.log" Dec 27 06:17:35 crc kubenswrapper[4760]: I1227 06:17:35.218884 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9hbs_9c6fd0b7-8355-48f8-bc7b-80168772194d/extract-utilities/0.log" Dec 27 06:17:35 crc kubenswrapper[4760]: I1227 06:17:35.589263 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9hbs_9c6fd0b7-8355-48f8-bc7b-80168772194d/registry-server/0.log" Dec 27 06:18:50 crc kubenswrapper[4760]: I1227 06:18:50.354858 4760 generic.go:334] "Generic (PLEG): container finished" podID="a42ba0ed-110f-4302-a1b5-6988339a1039" containerID="2adc3ed54b4cff5724900f0dff3f273ada4fcbe9d9016e4e62cf9a1813bfd02e" exitCode=0 Dec 27 06:18:50 crc kubenswrapper[4760]: I1227 06:18:50.354972 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kpjcg/must-gather-sqm6s" event={"ID":"a42ba0ed-110f-4302-a1b5-6988339a1039","Type":"ContainerDied","Data":"2adc3ed54b4cff5724900f0dff3f273ada4fcbe9d9016e4e62cf9a1813bfd02e"} Dec 27 06:18:50 crc kubenswrapper[4760]: I1227 06:18:50.356366 4760 scope.go:117] "RemoveContainer" containerID="2adc3ed54b4cff5724900f0dff3f273ada4fcbe9d9016e4e62cf9a1813bfd02e" Dec 27 06:18:50 
crc kubenswrapper[4760]: I1227 06:18:50.768431 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kpjcg_must-gather-sqm6s_a42ba0ed-110f-4302-a1b5-6988339a1039/gather/0.log" Dec 27 06:18:57 crc kubenswrapper[4760]: I1227 06:18:57.891376 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kpjcg/must-gather-sqm6s"] Dec 27 06:18:57 crc kubenswrapper[4760]: I1227 06:18:57.892208 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kpjcg/must-gather-sqm6s" podUID="a42ba0ed-110f-4302-a1b5-6988339a1039" containerName="copy" containerID="cri-o://7e98db6db2b9c4c8892ff54cd3172194e832162bf057eadec9b453f346e7116e" gracePeriod=2 Dec 27 06:18:57 crc kubenswrapper[4760]: I1227 06:18:57.902581 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kpjcg/must-gather-sqm6s"] Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.272283 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kpjcg_must-gather-sqm6s_a42ba0ed-110f-4302-a1b5-6988339a1039/copy/0.log" Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.272827 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kpjcg/must-gather-sqm6s" Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.415464 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kpjcg_must-gather-sqm6s_a42ba0ed-110f-4302-a1b5-6988339a1039/copy/0.log" Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.416269 4760 generic.go:334] "Generic (PLEG): container finished" podID="a42ba0ed-110f-4302-a1b5-6988339a1039" containerID="7e98db6db2b9c4c8892ff54cd3172194e832162bf057eadec9b453f346e7116e" exitCode=143 Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.416327 4760 scope.go:117] "RemoveContainer" containerID="7e98db6db2b9c4c8892ff54cd3172194e832162bf057eadec9b453f346e7116e" Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.416361 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kpjcg/must-gather-sqm6s" Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.420072 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a42ba0ed-110f-4302-a1b5-6988339a1039-must-gather-output\") pod \"a42ba0ed-110f-4302-a1b5-6988339a1039\" (UID: \"a42ba0ed-110f-4302-a1b5-6988339a1039\") " Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.420223 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qctg9\" (UniqueName: \"kubernetes.io/projected/a42ba0ed-110f-4302-a1b5-6988339a1039-kube-api-access-qctg9\") pod \"a42ba0ed-110f-4302-a1b5-6988339a1039\" (UID: \"a42ba0ed-110f-4302-a1b5-6988339a1039\") " Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.432024 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a42ba0ed-110f-4302-a1b5-6988339a1039-kube-api-access-qctg9" (OuterVolumeSpecName: "kube-api-access-qctg9") pod "a42ba0ed-110f-4302-a1b5-6988339a1039" (UID: "a42ba0ed-110f-4302-a1b5-6988339a1039"). InnerVolumeSpecName "kube-api-access-qctg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.434661 4760 scope.go:117] "RemoveContainer" containerID="2adc3ed54b4cff5724900f0dff3f273ada4fcbe9d9016e4e62cf9a1813bfd02e" Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.522453 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qctg9\" (UniqueName: \"kubernetes.io/projected/a42ba0ed-110f-4302-a1b5-6988339a1039-kube-api-access-qctg9\") on node \"crc\" DevicePath \"\"" Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.528071 4760 scope.go:117] "RemoveContainer" containerID="7e98db6db2b9c4c8892ff54cd3172194e832162bf057eadec9b453f346e7116e" Dec 27 06:18:58 crc kubenswrapper[4760]: E1227 06:18:58.528806 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e98db6db2b9c4c8892ff54cd3172194e832162bf057eadec9b453f346e7116e\": container with ID starting with 7e98db6db2b9c4c8892ff54cd3172194e832162bf057eadec9b453f346e7116e not found: ID does not exist" containerID="7e98db6db2b9c4c8892ff54cd3172194e832162bf057eadec9b453f346e7116e" Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.528869 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e98db6db2b9c4c8892ff54cd3172194e832162bf057eadec9b453f346e7116e"} err="failed to get container status \"7e98db6db2b9c4c8892ff54cd3172194e832162bf057eadec9b453f346e7116e\": rpc error: code = NotFound desc = could not find container \"7e98db6db2b9c4c8892ff54cd3172194e832162bf057eadec9b453f346e7116e\": container with ID starting with 7e98db6db2b9c4c8892ff54cd3172194e832162bf057eadec9b453f346e7116e not found: ID does not exist" Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.528917 4760 scope.go:117] "RemoveContainer" containerID="2adc3ed54b4cff5724900f0dff3f273ada4fcbe9d9016e4e62cf9a1813bfd02e" Dec 27 06:18:58 crc kubenswrapper[4760]: E1227 06:18:58.529326 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2adc3ed54b4cff5724900f0dff3f273ada4fcbe9d9016e4e62cf9a1813bfd02e\": container with ID starting with 2adc3ed54b4cff5724900f0dff3f273ada4fcbe9d9016e4e62cf9a1813bfd02e not found: ID does not exist" containerID="2adc3ed54b4cff5724900f0dff3f273ada4fcbe9d9016e4e62cf9a1813bfd02e" Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.529367 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2adc3ed54b4cff5724900f0dff3f273ada4fcbe9d9016e4e62cf9a1813bfd02e"} err="failed to get container status \"2adc3ed54b4cff5724900f0dff3f273ada4fcbe9d9016e4e62cf9a1813bfd02e\": rpc error: code = NotFound desc = could not find container \"2adc3ed54b4cff5724900f0dff3f273ada4fcbe9d9016e4e62cf9a1813bfd02e\": container with ID starting with 2adc3ed54b4cff5724900f0dff3f273ada4fcbe9d9016e4e62cf9a1813bfd02e not found: ID does not exist" Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.532869 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a42ba0ed-110f-4302-a1b5-6988339a1039-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a42ba0ed-110f-4302-a1b5-6988339a1039" (UID: "a42ba0ed-110f-4302-a1b5-6988339a1039"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 27 06:18:58 crc kubenswrapper[4760]: I1227 06:18:58.623737 4760 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a42ba0ed-110f-4302-a1b5-6988339a1039-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 27 06:18:59 crc kubenswrapper[4760]: I1227 06:18:59.511333 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a42ba0ed-110f-4302-a1b5-6988339a1039" path="/var/lib/kubelet/pods/a42ba0ed-110f-4302-a1b5-6988339a1039/volumes" Dec 27 06:19:35 crc kubenswrapper[4760]: I1227 06:19:35.288397 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 27 06:19:35 crc kubenswrapper[4760]: I1227 06:19:35.289189 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.686764 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jvrp4"] Dec 27 06:19:40 crc kubenswrapper[4760]: E1227 06:19:40.687679 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a42ba0ed-110f-4302-a1b5-6988339a1039" containerName="copy" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.687708 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a42ba0ed-110f-4302-a1b5-6988339a1039" containerName="copy" Dec 27 06:19:40 crc kubenswrapper[4760]: E1227 06:19:40.687746 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a42ba0ed-110f-4302-a1b5-6988339a1039" containerName="gather" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.687765 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a42ba0ed-110f-4302-a1b5-6988339a1039" containerName="gather" Dec 27 06:19:40 crc kubenswrapper[4760]: E1227 06:19:40.687818 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe30641-3d44-4669-897d-9f6fcaf0a71c" containerName="registry-server" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.687835 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe30641-3d44-4669-897d-9f6fcaf0a71c" containerName="registry-server" Dec 27 06:19:40 crc kubenswrapper[4760]: E1227 06:19:40.687857 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe30641-3d44-4669-897d-9f6fcaf0a71c" containerName="extract-utilities" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.687873 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe30641-3d44-4669-897d-9f6fcaf0a71c" containerName="extract-utilities" Dec 27 06:19:40 crc kubenswrapper[4760]: E1227 06:19:40.687895 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe30641-3d44-4669-897d-9f6fcaf0a71c" containerName="extract-content" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.687910 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe30641-3d44-4669-897d-9f6fcaf0a71c" containerName="extract-content" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.688322 4760 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="bbe30641-3d44-4669-897d-9f6fcaf0a71c" containerName="registry-server" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.688360 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a42ba0ed-110f-4302-a1b5-6988339a1039" containerName="gather" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.688396 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a42ba0ed-110f-4302-a1b5-6988339a1039" containerName="copy" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.691062 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvrp4" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.707418 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvrp4"] Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.824385 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw7mw\" (UniqueName: \"kubernetes.io/projected/511ff647-6db1-45ab-83d0-b543fd723d46-kube-api-access-qw7mw\") pod \"redhat-marketplace-jvrp4\" (UID: \"511ff647-6db1-45ab-83d0-b543fd723d46\") " pod="openshift-marketplace/redhat-marketplace-jvrp4" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.824441 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/511ff647-6db1-45ab-83d0-b543fd723d46-utilities\") pod \"redhat-marketplace-jvrp4\" (UID: \"511ff647-6db1-45ab-83d0-b543fd723d46\") " pod="openshift-marketplace/redhat-marketplace-jvrp4" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.824488 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/511ff647-6db1-45ab-83d0-b543fd723d46-catalog-content\") pod \"redhat-marketplace-jvrp4\" (UID: \"511ff647-6db1-45ab-83d0-b543fd723d46\") " pod="openshift-marketplace/redhat-marketplace-jvrp4" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.925909 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw7mw\" (UniqueName: \"kubernetes.io/projected/511ff647-6db1-45ab-83d0-b543fd723d46-kube-api-access-qw7mw\") pod \"redhat-marketplace-jvrp4\" (UID: \"511ff647-6db1-45ab-83d0-b543fd723d46\") " pod="openshift-marketplace/redhat-marketplace-jvrp4" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.925976 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/511ff647-6db1-45ab-83d0-b543fd723d46-utilities\") pod \"redhat-marketplace-jvrp4\" (UID: \"511ff647-6db1-45ab-83d0-b543fd723d46\") " pod="openshift-marketplace/redhat-marketplace-jvrp4" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.926051 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/511ff647-6db1-45ab-83d0-b543fd723d46-catalog-content\") pod \"redhat-marketplace-jvrp4\" (UID: \"511ff647-6db1-45ab-83d0-b543fd723d46\") " pod="openshift-marketplace/redhat-marketplace-jvrp4" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.926691 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/511ff647-6db1-45ab-83d0-b543fd723d46-catalog-content\") pod 
\"redhat-marketplace-jvrp4\" (UID: \"511ff647-6db1-45ab-83d0-b543fd723d46\") " pod="openshift-marketplace/redhat-marketplace-jvrp4" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.927405 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/511ff647-6db1-45ab-83d0-b543fd723d46-utilities\") pod \"redhat-marketplace-jvrp4\" (UID: \"511ff647-6db1-45ab-83d0-b543fd723d46\") " pod="openshift-marketplace/redhat-marketplace-jvrp4" Dec 27 06:19:40 crc kubenswrapper[4760]: I1227 06:19:40.967442 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw7mw\" (UniqueName: \"kubernetes.io/projected/511ff647-6db1-45ab-83d0-b543fd723d46-kube-api-access-qw7mw\") pod \"redhat-marketplace-jvrp4\" (UID: \"511ff647-6db1-45ab-83d0-b543fd723d46\") " pod="openshift-marketplace/redhat-marketplace-jvrp4" Dec 27 06:19:41 crc kubenswrapper[4760]: I1227 06:19:41.018632 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvrp4" Dec 27 06:19:41 crc kubenswrapper[4760]: I1227 06:19:41.296164 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvrp4"] Dec 27 06:19:41 crc kubenswrapper[4760]: I1227 06:19:41.752818 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvrp4" event={"ID":"511ff647-6db1-45ab-83d0-b543fd723d46","Type":"ContainerStarted","Data":"b46ac158ce2c650b5d085cd6a911857a7aed1f02ea56f6bcdd26c02a0ca2958c"} Dec 27 06:19:43 crc kubenswrapper[4760]: I1227 06:19:43.769603 4760 generic.go:334] "Generic (PLEG): container finished" podID="511ff647-6db1-45ab-83d0-b543fd723d46" containerID="e81617bb07c295710adc19868ea009ea216c9963a35adf2e5c5281dbb5e45dcd" exitCode=0 Dec 27 06:19:43 crc kubenswrapper[4760]: I1227 06:19:43.769832 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvrp4" event={"ID":"511ff647-6db1-45ab-83d0-b543fd723d46","Type":"ContainerDied","Data":"e81617bb07c295710adc19868ea009ea216c9963a35adf2e5c5281dbb5e45dcd"} Dec 27 06:19:43 crc kubenswrapper[4760]: I1227 06:19:43.773005 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 27 06:19:44 crc kubenswrapper[4760]: I1227 06:19:44.779639 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvrp4" event={"ID":"511ff647-6db1-45ab-83d0-b543fd723d46","Type":"ContainerStarted","Data":"6e7fb7e5eec36896047bd84033dbf6b7b47f232adc6b7926fc2c1692cf6e23b9"} Dec 27 06:19:45 crc kubenswrapper[4760]: I1227 06:19:45.791413 4760 generic.go:334] "Generic (PLEG): container finished" podID="511ff647-6db1-45ab-83d0-b543fd723d46" containerID="6e7fb7e5eec36896047bd84033dbf6b7b47f232adc6b7926fc2c1692cf6e23b9" exitCode=0 Dec 27 06:19:45 crc kubenswrapper[4760]: I1227 06:19:45.791485 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvrp4" event={"ID":"511ff647-6db1-45ab-83d0-b543fd723d46","Type":"ContainerDied","Data":"6e7fb7e5eec36896047bd84033dbf6b7b47f232adc6b7926fc2c1692cf6e23b9"} Dec 27 06:19:46 crc kubenswrapper[4760]: I1227 06:19:46.832330 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvrp4" event={"ID":"511ff647-6db1-45ab-83d0-b543fd723d46","Type":"ContainerStarted","Data":"acb6ec1add6f728f54164ba0da73ff47e3dfbcb1f910ecd0492b1fa5604b2ba1"} 
Dec 27 06:19:46 crc kubenswrapper[4760]: I1227 06:19:46.854278 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jvrp4" podStartSLOduration=4.239559369 podStartE2EDuration="6.854261617s" podCreationTimestamp="2025-12-27 06:19:40 +0000 UTC" firstStartedPulling="2025-12-27 06:19:43.772589704 +0000 UTC m=+2106.532659039" lastFinishedPulling="2025-12-27 06:19:46.387291962 +0000 UTC m=+2109.147361287" observedRunningTime="2025-12-27 06:19:46.848576569 +0000 UTC m=+2109.608645884" watchObservedRunningTime="2025-12-27 06:19:46.854261617 +0000 UTC m=+2109.614330922"
Dec 27 06:19:51 crc kubenswrapper[4760]: I1227 06:19:51.019483 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jvrp4"
Dec 27 06:19:51 crc kubenswrapper[4760]: I1227 06:19:51.020204 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jvrp4"
Dec 27 06:19:51 crc kubenswrapper[4760]: I1227 06:19:51.061149 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jvrp4"
Dec 27 06:19:51 crc kubenswrapper[4760]: I1227 06:19:51.934810 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jvrp4"
Dec 27 06:19:51 crc kubenswrapper[4760]: I1227 06:19:51.999197 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvrp4"]
Dec 27 06:19:53 crc kubenswrapper[4760]: I1227 06:19:53.891800 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jvrp4" podUID="511ff647-6db1-45ab-83d0-b543fd723d46" containerName="registry-server" containerID="cri-o://acb6ec1add6f728f54164ba0da73ff47e3dfbcb1f910ecd0492b1fa5604b2ba1" gracePeriod=2
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.539523 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvrp4"
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.633567 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw7mw\" (UniqueName: \"kubernetes.io/projected/511ff647-6db1-45ab-83d0-b543fd723d46-kube-api-access-qw7mw\") pod \"511ff647-6db1-45ab-83d0-b543fd723d46\" (UID: \"511ff647-6db1-45ab-83d0-b543fd723d46\") "
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.633681 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/511ff647-6db1-45ab-83d0-b543fd723d46-catalog-content\") pod \"511ff647-6db1-45ab-83d0-b543fd723d46\" (UID: \"511ff647-6db1-45ab-83d0-b543fd723d46\") "
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.633713 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/511ff647-6db1-45ab-83d0-b543fd723d46-utilities\") pod \"511ff647-6db1-45ab-83d0-b543fd723d46\" (UID: \"511ff647-6db1-45ab-83d0-b543fd723d46\") "
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.634893 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/511ff647-6db1-45ab-83d0-b543fd723d46-utilities" (OuterVolumeSpecName: "utilities") pod "511ff647-6db1-45ab-83d0-b543fd723d46" (UID: "511ff647-6db1-45ab-83d0-b543fd723d46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.643380 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/511ff647-6db1-45ab-83d0-b543fd723d46-kube-api-access-qw7mw" (OuterVolumeSpecName: "kube-api-access-qw7mw") pod "511ff647-6db1-45ab-83d0-b543fd723d46" (UID: "511ff647-6db1-45ab-83d0-b543fd723d46"). InnerVolumeSpecName "kube-api-access-qw7mw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.654650 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/511ff647-6db1-45ab-83d0-b543fd723d46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "511ff647-6db1-45ab-83d0-b543fd723d46" (UID: "511ff647-6db1-45ab-83d0-b543fd723d46"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.735314 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw7mw\" (UniqueName: \"kubernetes.io/projected/511ff647-6db1-45ab-83d0-b543fd723d46-kube-api-access-qw7mw\") on node \"crc\" DevicePath \"\""
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.735835 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/511ff647-6db1-45ab-83d0-b543fd723d46-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.735922 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/511ff647-6db1-45ab-83d0-b543fd723d46-utilities\") on node \"crc\" DevicePath \"\""
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.902992 4760 generic.go:334] "Generic (PLEG): container finished" podID="511ff647-6db1-45ab-83d0-b543fd723d46" containerID="acb6ec1add6f728f54164ba0da73ff47e3dfbcb1f910ecd0492b1fa5604b2ba1" exitCode=0
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.903032 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvrp4" event={"ID":"511ff647-6db1-45ab-83d0-b543fd723d46","Type":"ContainerDied","Data":"acb6ec1add6f728f54164ba0da73ff47e3dfbcb1f910ecd0492b1fa5604b2ba1"}
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.903060 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvrp4" event={"ID":"511ff647-6db1-45ab-83d0-b543fd723d46","Type":"ContainerDied","Data":"b46ac158ce2c650b5d085cd6a911857a7aed1f02ea56f6bcdd26c02a0ca2958c"}
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.903077 4760 scope.go:117] "RemoveContainer" containerID="acb6ec1add6f728f54164ba0da73ff47e3dfbcb1f910ecd0492b1fa5604b2ba1"
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.903091 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvrp4"
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.924996 4760 scope.go:117] "RemoveContainer" containerID="6e7fb7e5eec36896047bd84033dbf6b7b47f232adc6b7926fc2c1692cf6e23b9"
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.937653 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvrp4"]
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.950911 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvrp4"]
Dec 27 06:19:54 crc kubenswrapper[4760]: I1227 06:19:54.990879 4760 scope.go:117] "RemoveContainer" containerID="e81617bb07c295710adc19868ea009ea216c9963a35adf2e5c5281dbb5e45dcd"
Dec 27 06:19:55 crc kubenswrapper[4760]: I1227 06:19:55.007662 4760 scope.go:117] "RemoveContainer" containerID="acb6ec1add6f728f54164ba0da73ff47e3dfbcb1f910ecd0492b1fa5604b2ba1"
Dec 27 06:19:55 crc kubenswrapper[4760]: E1227 06:19:55.008044 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acb6ec1add6f728f54164ba0da73ff47e3dfbcb1f910ecd0492b1fa5604b2ba1\": container with ID starting with acb6ec1add6f728f54164ba0da73ff47e3dfbcb1f910ecd0492b1fa5604b2ba1 not found: ID does not exist" containerID="acb6ec1add6f728f54164ba0da73ff47e3dfbcb1f910ecd0492b1fa5604b2ba1"
Dec 27 06:19:55 crc kubenswrapper[4760]: I1227 06:19:55.008075 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb6ec1add6f728f54164ba0da73ff47e3dfbcb1f910ecd0492b1fa5604b2ba1"} err="failed to get container status \"acb6ec1add6f728f54164ba0da73ff47e3dfbcb1f910ecd0492b1fa5604b2ba1\": rpc error: code = NotFound desc = could not find container \"acb6ec1add6f728f54164ba0da73ff47e3dfbcb1f910ecd0492b1fa5604b2ba1\": container with ID starting with acb6ec1add6f728f54164ba0da73ff47e3dfbcb1f910ecd0492b1fa5604b2ba1 not found: ID does not exist"
Dec 27 06:19:55 crc kubenswrapper[4760]: I1227 06:19:55.008112 4760 scope.go:117] "RemoveContainer" containerID="6e7fb7e5eec36896047bd84033dbf6b7b47f232adc6b7926fc2c1692cf6e23b9"
Dec 27 06:19:55 crc kubenswrapper[4760]: E1227 06:19:55.008326 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e7fb7e5eec36896047bd84033dbf6b7b47f232adc6b7926fc2c1692cf6e23b9\": container with ID starting with 6e7fb7e5eec36896047bd84033dbf6b7b47f232adc6b7926fc2c1692cf6e23b9 not found: ID does not exist" containerID="6e7fb7e5eec36896047bd84033dbf6b7b47f232adc6b7926fc2c1692cf6e23b9"
Dec 27 06:19:55 crc kubenswrapper[4760]: I1227 06:19:55.008350 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e7fb7e5eec36896047bd84033dbf6b7b47f232adc6b7926fc2c1692cf6e23b9"} err="failed to get container status \"6e7fb7e5eec36896047bd84033dbf6b7b47f232adc6b7926fc2c1692cf6e23b9\": rpc error: code = NotFound desc = could not find container \"6e7fb7e5eec36896047bd84033dbf6b7b47f232adc6b7926fc2c1692cf6e23b9\": container with ID starting with 6e7fb7e5eec36896047bd84033dbf6b7b47f232adc6b7926fc2c1692cf6e23b9 not found: ID does not exist"
Dec 27 06:19:55 crc kubenswrapper[4760]: I1227 06:19:55.008364 4760 scope.go:117] "RemoveContainer" containerID="e81617bb07c295710adc19868ea009ea216c9963a35adf2e5c5281dbb5e45dcd"
Dec 27 06:19:55 crc kubenswrapper[4760]: E1227 06:19:55.008677 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e81617bb07c295710adc19868ea009ea216c9963a35adf2e5c5281dbb5e45dcd\": container with ID starting with e81617bb07c295710adc19868ea009ea216c9963a35adf2e5c5281dbb5e45dcd not found: ID does not exist" containerID="e81617bb07c295710adc19868ea009ea216c9963a35adf2e5c5281dbb5e45dcd"
Dec 27 06:19:55 crc kubenswrapper[4760]: I1227 06:19:55.008754 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e81617bb07c295710adc19868ea009ea216c9963a35adf2e5c5281dbb5e45dcd"} err="failed to get container status \"e81617bb07c295710adc19868ea009ea216c9963a35adf2e5c5281dbb5e45dcd\": rpc error: code = NotFound desc = could not find container \"e81617bb07c295710adc19868ea009ea216c9963a35adf2e5c5281dbb5e45dcd\": container with ID starting with e81617bb07c295710adc19868ea009ea216c9963a35adf2e5c5281dbb5e45dcd not found: ID does not exist"
Dec 27 06:19:55 crc kubenswrapper[4760]: I1227 06:19:55.532318 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="511ff647-6db1-45ab-83d0-b543fd723d46" path="/var/lib/kubelet/pods/511ff647-6db1-45ab-83d0-b543fd723d46/volumes"
Dec 27 06:20:05 crc kubenswrapper[4760]: I1227 06:20:05.287451 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 27 06:20:05 crc kubenswrapper[4760]: I1227 06:20:05.288244 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 27 06:20:35 crc kubenswrapper[4760]: I1227 06:20:35.287678 4760 patch_prober.go:28] interesting pod/machine-config-daemon-xhkgh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 27 06:20:35 crc kubenswrapper[4760]: I1227 06:20:35.288521 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 27 06:20:35 crc kubenswrapper[4760]: I1227 06:20:35.288608 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh"
Dec 27 06:20:35 crc kubenswrapper[4760]: I1227 06:20:35.289769 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3474d515fd56fd3240f266f710815bf5c096ca2dc078649cebed13766d48b76a"} pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 27 06:20:35 crc kubenswrapper[4760]: I1227 06:20:35.289888 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" podUID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerName="machine-config-daemon" containerID="cri-o://3474d515fd56fd3240f266f710815bf5c096ca2dc078649cebed13766d48b76a" gracePeriod=600
Dec 27 06:20:36 crc kubenswrapper[4760]: I1227 06:20:36.414716 4760 generic.go:334] "Generic (PLEG): container finished" podID="4817e744-ce93-48b6-8642-f3ae31d2db1b" containerID="3474d515fd56fd3240f266f710815bf5c096ca2dc078649cebed13766d48b76a" exitCode=0
Dec 27 06:20:36 crc kubenswrapper[4760]: I1227 06:20:36.415056 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerDied","Data":"3474d515fd56fd3240f266f710815bf5c096ca2dc078649cebed13766d48b76a"}
Dec 27 06:20:36 crc kubenswrapper[4760]: I1227 06:20:36.415362 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xhkgh" event={"ID":"4817e744-ce93-48b6-8642-f3ae31d2db1b","Type":"ContainerStarted","Data":"acb316222c3fb850452b4a9ec83d9a6650039b9fe7353c900f85d4c5b76f1289"}
Dec 27 06:20:36 crc kubenswrapper[4760]: I1227 06:20:36.415388 4760 scope.go:117] "RemoveContainer" containerID="25c3028b509deb7dc508ea534a79251bc70305eee02397e5347e25c9bd519717"